Comment author: lukeprog 15 April 2018 01:13:08AM 0 points

Yes, you may submit a writing sample by sending it to jobs@openphilanthropy.org, as FirstName.LastName.Sample (e.g. John.Smith.Sample.doc or John.Smith.Sample.pdf). If you'd like to submit a letter of recommendation, please include it as a page of your résumé.

Please keep in mind that writing samples and letters of recommendation are entirely optional, so if you don't already have them handy, I don't recommend spending time pulling them together. Our application process puts much more weight on work test performance anyway.

Comment author: Calvin_Baker 15 April 2018 01:00:23AM 0 points

Is there any room in the application process for applicants to submit samples of original research or academic letters of recommendation?

Thank you!

Comment author: Denkenberger 14 April 2018 02:43:08PM 0 points

Thanks for the very useful link. I think this means that if you are one of those people who are okay with donating 50%, and you donate to one of the smaller, funding-constrained organizations, it really would be high impact.

Comment author: JanBrauner 14 April 2018 12:19:40PM 0 points

I really like that idea. It might also be useful to check whether this model would have predicted past changes in career recommendations.

Comment author: Dunja 14 April 2018 11:47:29AM 2 points

Hi Richenda, great stuff, thanks for sharing the link! That's indeed a big impact, and it's valuable to know for future events. It fits very well with what Evan and Jan have written below :)

Comment author: Dunja 14 April 2018 11:44:56AM * 0 points

Thanks for this, Evan. I was primarily referring to smaller events which aren't primarily targeted at attracting new people. Though now that you mention it, I find the bigger events even worse, haha! I was at one bigger EA event, and while I perfectly understand it can introduce many people to the topic and make people passionate about the cause, I didn't experience the same, mainly because I didn't really learn much. But this probably depends on personality traits, expectations, etc. :) In general, your argument makes very much sense: if sufficiently many people are around, for some of them this will work (and the above post by Richenda shows there is even some empirical evidence for that). For me, forums like this one are, for example, way more interesting ;) At the end of the day, it's probably best if there is a variety of venues/platforms for different kinds of people and interests.

Comment author: Julia_Wise 14 April 2018 11:43:52AM 7 points

I took this to mean "even if you don't expect your choice to have economic impact (say, your friend ordered the KFC bucket, doesn't want to finish it, and asks if you'd like some), there are still other factors to consider, like norm-setting and your own cognitive dissonance."

Comment author: MichaelPlant 14 April 2018 11:20:33AM 0 points

Ah. So the EV is for a single year. But I still only see $1bn. So your number is "this is the cost per life year saved if we spend the money this year and it causes an instantaneous reduction in X-risk for this year"?

So your figure is the cost-effectiveness of reducing instantaneous X-risk at Tn, where Tn is now, whenever now is. But it's not the cost-effectiveness of that reduction at Tf, where Tf is some year in the future, because the further in the future the reduction occurs, the less the EV is on PAA. If I'm wondering, from the perspective of T0, what the cost-effectiveness would be of spending $1bn in 10 years to cause a reduction at T10, then on your model I increase the mean age by 10 years to 48, and the average cost per year becomes $12k. From the perspective of T10, reducing X-risk at T10 in the way you say is, again, $9k.
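To make the arithmetic concrete, here's a minimal sketch of the calculation as I understand it; every parameter value below is a hypothetical placeholder chosen only to reproduce the $9k and $12k figures, not a number from your model:

```python
# Minimal sketch of the person-affecting cost-per-life-year arithmetic.
# All parameter values are hypothetical placeholders, not the model's numbers.

def cost_per_life_year(cost, expected_deaths_averted, mean_age, life_expectancy):
    """Cost per life-year saved, counting only years accruing to people
    who already exist (the person-affecting restriction)."""
    remaining_years = life_expectancy - mean_age
    return cost / (expected_deaths_averted * remaining_years)

COST = 1e9               # the $1bn spend
DEATHS_AVERTED = 2_780   # hypothetical expected deaths averted by the spend
LIFE_EXPECTANCY = 78     # hypothetical

print(cost_per_life_year(COST, DEATHS_AVERTED, mean_age=38, life_expectancy=LIFE_EXPECTANCY))
# ~9,000: spend now, reduction now (Tn)
print(cost_per_life_year(COST, DEATHS_AVERTED, mean_age=48, life_expectancy=LIFE_EXPECTANCY))
# ~12,000: spend in 10 years, reduction at T10, valued from T0
```

On these placeholder numbers, shifting the mean age from 38 to 48 is exactly what moves the figure from ~$9k to ~$12k; for a totalist, the future-generations term would swamp that adjustment.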

By contrast, for totalists the calculations would be the same (excepting inflation, etc.).

Also, not sure why my comment was downvoted. I wasn't being rude (or, I think, stupid) and I think it's unhelpful to downvote without explanation as it just looks petty and feels unfriendly.

Comment author: KevinWatkinson 14 April 2018 08:45:41AM 0 points

Thanks for the response. Yes, I was wondering about conformity in the sense of prevailing thinking within a particular cause area. Is there an expectation for talent to conform to prevailing thinking to a certain degree, and would this then reinforce that idea of being talented? Or could talent be more related to a set of core values or principles?

I think some cause areas seem to have fairly high expectations of conformity around in-group/out-group identity. If that's the case, then talented people may conform or not (assuming not all talented people are necessarily in-group thinkers), but conforming seems to confer various advantages on those who do.

Comment author: Alex_Barry 14 April 2018 07:54:28AM * 0 points

Yes, "switched" was a bit strong, I meant that by default people will assume a standard usage, so if you only reveal later that actually you are using a non-standard definition people will be surprised. I guess despite your response to Objection 2 I was unsure in this case whether you were arguing in terms of (what are at least to me) conventional definitions or not, and I had assumed you were.

To italicize words, put *s on either side, like *this*. (When you are replying to a comment, there is a 'show help' button that explains some of these things.)

Comment author: Paul_Christiano 14 April 2018 01:39:28AM 4 points

In general I feel like donor lotteries should be preferred as a default over small donations to EA Funds (winners can ultimately donate to EA Funds if they decide that's the best option).

What are the best arguments in favor of EA Funds as a recommendation over lotteries? Looking more normal?

(There are currently no active lotteries, so this is not a recommendation for short-term donations.)
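For anyone unfamiliar with the mechanism: a donor lottery pools contributions and picks a single donor, with probability proportional to the amount given, to allocate the whole pool, so each donor's expected allocation equals their contribution. A minimal sketch, with made-up names and amounts:

```python
import random

def run_donor_lottery(donations):
    """Pick a winner with probability proportional to their donation.
    donations: dict mapping donor -> amount contributed."""
    donors = list(donations)
    winner = random.choices(donors, weights=[donations[d] for d in donors])[0]
    return winner, sum(donations.values())

# Example: C wins with probability 0.75, B with 0.20, A with 0.05.
winner, pool = run_donor_lottery({"A": 500, "B": 2000, "C": 7500})
print(f"{winner} decides where the full ${pool} goes")
```

Expected money moved is unchanged for every participant, but the winner can justify spending serious research time on the allocation, which is the point of preferring the lottery.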

Comment author: Evan_Gaensbauer 14 April 2018 12:25:25AM * 2 points

> On trust networks: these are very powerful and effective. Y Combinator, for example, says it gets most of its best companies via personal recommendation, and the top VCs say that the best way to get funded by them is an introduction from someone they trust.
>
> (Btw, I got an EA Grant last year, I expect in large part because CEA knew me from my successfully running an EAGx conference. I think the above argument is strong on its own, but my guess is many folks around here would like me to mention this fact.)

That trust networks work well, and that according to your experience with the EA Grants there is an effective trust network within EA, just raises the question of why trust networks within EA have failed to work for the EA Funds, since so little has been allocated from them.

> Yes. If I were running EA Grants, I would continually be in contact with the community, finding out people's project ideas, discussing them with people for five hours, getting to know them and how much I could trust them, and then handing out money as I saw fit. This is one of the biggest funding bottlenecks in the community. The place that seems to have addressed it best has actually been the winners of the donor lotteries, who seemed to take it seriously and use the personal information they had.
>
> I haven’t even heard about EA Grants this time around, which seems like a failure on all the obvious axes (including that of letting grantees know that the EA community is a reliable source of funding that you can make multi-year plans around; this makes me mostly update toward EA Grants being a one-off thing that I shouldn't rely on).

FWIW, nothing I've heard about the EA Funds leads me to believe your impression is at all incorrect.

Comment author: Evan_Gaensbauer 14 April 2018 12:21:47AM 2 points

> Whether this discount rate is accurate is another question – given the relative abundance of cash available to EA orgs (through OpenPhil and Good Ventures), a rate as high as this is surprising.

I think the EA community has a limited data-set regarding the availability of funding from EA orgs, and we try to infer more from it than we realistically can. OpenPhil's relationship to EA is rapidly changing, and OpenPhil is rapidly changing the EA movement. OpenPhil has established relationships with EA orgs across all the majorly represented causes such that, as long as things keep going along an optimistic trajectory, those orgs can expect up to 50% of their room for more funding per year to be filled by OpenPhil. But OpenPhil only began these relationships in the last year or two. Prior to that, OpenPhil as an organization was still finding its feet, and was more reticent about how large a grant EA organizations might expect.

I haven't worked at an EA org, but knowing people working at several, my sense is that during giving season things are fraught for young(er) EA orgs that aren't sure if they'll have enough to keep the lights on for the next year. Knowing OpenPhil might clear anywhere between 0% and 50% of an org's budget doesn't do enough to reduce uncertainty. So it doesn't surprise me that, in spite of the apparent abundance of funding available from OpenPhil, organizations rate donations today as worth so much more than the same amount donated a year from now.
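To illustrate what a discount rate on donations cashes out to, here is a minimal sketch; the 12% rate is a placeholder, not the figure from the parent post:

```python
# Minimal sketch: under an annual discount rate r, a donation of `amount`
# received `years` from now is worth amount / (1 + r)**years to the org today.
# r = 0.12 is a placeholder, not the rate the parent post refers to.

def present_value(amount, r, years=1):
    return amount / (1 + r) ** years

print(present_value(10_000, r=0.12))  # ~8928.57: $10k next year ~ $8.9k now
```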

Comment author: Gregory_Lewis 14 April 2018 12:07:15AM 0 points

The EV in question is the reduction in x-risk for a single year, not across the century. I'll change the wording to make this clearer.

Comment author: Jeffhe 13 April 2018 11:51:40PM * 1 point

I certainly did not mean to cause confusion, and I apologize for wasting any of your time that you spent trying to make sense of things.

By "you switched", do you mean that in my response to Objection 1, I gave the impression that only experience matters to me, such that when I mentioned in my response to Objection 2 that who suffers matters to me too, it seems like I've switched?

And thanks, I have fixed the broken quote. Btw, do you know how to italicize words?

Comment author: Jeffhe 13 April 2018 11:43:01PM * 0 points

Thanks for the exposition. I see the argument now.

You're saying that, if we determined "total pain" by my preferred approach, then all possible actions would certainly result in states of affairs in which the total pains are uniformly high, with the only difference between those states of affairs being the identity of those who suffer it.

I've since made clear to you that who suffers matters to me too, so if the above is right, then according to my moral theory, what we ought to do is assign an equal chance to each possible action we could take, since each possible action gives rise to the same total pain, just suffered by different individuals.

Your argument would continue: Any moral theory that gave this absurd recommendation cannot be correct. Since the root of the absurdity is my preferred approach to determining total pain, that approach to determining total pain must be problematic too.

My response:

JanBrauner, if I remember correctly, was talking about extreme unpredictability, but your argument doesn't seem to be based on unpredictability. If A1 and A2 are true, then each possible action seems, more or less inevitably, to result in a different person suffering maximal pain.

Anyways, if literally each possible action I could take would inevitably result in a different person suffering maximal pain (i.e. if A1 and A2 are true), I think I ought to assign an equal chance to each possible action (even though physically speaking I cannot).

I think there is no more absurdity to assigning each possible action an equal chance (assuming A1 and A2 are true) than there is in, say, flipping a coin between saving a million people on one island from being burned alive and saving one other person on another island from being burned alive. Since I don't find the latter absurd at all (keeping in mind that none of the million will suffer anything worse than the one, i.e. that the one would suffer no less than any one of the million), I would not find the former absurd either. Indeed, giving each person an equal chance of being saved from being burned alive seems to me like the right thing to do given that each person has the same amount to suffer. So I would feel similarly about assigning each possible action an equal chance (assuming A1 and A2 are true).
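A sketch of the equal-chance procedure I have in mind, purely illustrative:

```python
import random

# The equal-chance procedure: when every available action results in the
# same maximal individual pain, merely borne by different people, each
# action (and so each potential sufferer's rescue) gets an equal chance.

options = ["save the million on island A", "save the one on island B"]
print(random.choice(options))  # uniform: each option has probability 1/2
```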

Comment author: Evan_Gaensbauer 13 April 2018 11:27:42PM 2 points

I'm referring to effective altruists who aren't (yet) veg-n but are considering becoming so, and who are open-minded about, but currently unconvinced by, the argument that veg-nism has a genuine economic impact.

Comment author: MichaelPlant 13 April 2018 11:18:07PM 0 points

I agree it's really complicated, but it merits some thinking. The one practical implication I take away is: "if 80k says I should be doing X, there's almost no chance X will be the best thing I could do by the time I'm in a position to do it."

Comment author: MichaelPlant 13 April 2018 11:14:10PM 1 point

I think I'd go the other way and suggest people focus more on personal fit: i.e. do the thing in which you have greatest comparative advantage relative to the world as a whole, not just to the EA world.

Comment author: MichaelPlant 13 April 2018 11:04:39PM 1 point

> The economic impact of vegetarianism or veganism is only one factor in the decision of whether one should become a vegetarian or vegan, but an important one

I'm confused by this. If you genuinely think your purchase decisions will make no difference to what happens to animals, then you might as well go ahead and order the big bucket at KFC with a guiltless conscience.
