In response to comment by DanielFilan on On Priors
Comment author: MichaelDickens 28 April 2016 03:59:23PM * 0 points

Not sure what you mean by a 'combination distribution'

I mean that your prior probability density is given by $P(X) = w_{Pareto} P_{Pareto}(X) + w_{lognorm} P_{lognorm}(X)$ for nonnegative weights $w$ that sum to 1. (You can read LaTeX, right?)

In response to comment by MichaelDickens on On Priors
Comment author: DanielFilan 30 April 2016 08:30:37PM * 0 points

Sure. I think a better thing to do (which I think is what Carl is suggesting) is to have a joint prior distribution over x (the effectiveness of a randomly chosen intervention) and interventionDistribution (a categorical variable over the different shapes you think the space of interventions might have). So $P(x, \text{Pareto}) = P(\text{Pareto}) \, P(x \mid \text{Pareto}) = w_{Pareto} P_{Pareto}(x)$ and $P(x, \text{logNormal}) = P(\text{logNormal}) \, P(x \mid \text{logNormal}) = w_{logNormal} P_{logNormal}(x)$. Then, for the first intervention you see, your prior density over effectiveness is indeed $P(x) = w_{Pareto} P_{Pareto}(x) + w_{logNormal} P_{logNormal}(x)$, but after measuring a bunch of interventions, you can update your beliefs about the empirical distribution of effectivenesses.
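To make that concrete, here is a minimal sketch of the update, assuming Python with NumPy/SciPy; the shape parameters, prior weights, and "measurements" are all made up for illustration:

```python
import numpy as np
from scipy import stats

# Two hypothesised shapes for the effectiveness distribution. The shape
# parameters, prior weights, and "measurements" below are all made up.
shapes = {
    "Pareto": stats.pareto(b=1.5),      # heavy-tailed hypothesis
    "logNormal": stats.lognorm(s=1.0),  # thinner-tailed hypothesis
}
prior_w = {"Pareto": 0.1, "logNormal": 0.9}

observed = np.array([1.2, 1.8, 2.5, 4.0, 9.0])  # effectiveness measurements

# P(shape | data) is proportional to w_shape * prod_i P(x_i | shape);
# computed in log space for numerical stability.
log_post = {name: np.log(prior_w[name]) + dist.logpdf(observed).sum()
            for name, dist in shapes.items()}
log_norm = np.logaddexp(*log_post.values())
post_w = {name: np.exp(lp - log_norm) for name, lp in log_post.items()}

# Prior density for the *next* intervention: the same mixture, with the
# shape weights updated by the interventions measured so far.
def next_prior_pdf(x):
    return sum(post_w[name] * dist.pdf(x) for name, dist in shapes.items())
```

After seeing data that looks heavy-tailed, `post_w` shifts credence toward the Pareto shape, so the mixture used for the next intervention has a fatter tail than the original prior did.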

In response to comment by Carl_Shulman on On Priors
Comment author: MichaelDickens 27 April 2016 02:48:17PM * 1 point

Couple of important points you're making here.

On your first point, instead of using a single prior distribution I could do a weighted combination of multiple distributions. There are two ways to do this: either have a prior be a combination distribution, or compute multiple posteriors with different distributions and take their weighted average. Not sure which one correctly handles this uncertainty. I haven't done the math but I'd expect that either way, a formulation with distribution probabilities 90% log-normal/10% Pareto will give much more credence to high cost-effectiveness estimates than a pure log-normal. I don't believe it would change the results much to assign small probability to distributions with thinner tails than log-normal (e.g. normal or exponential).
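For what it's worth, the two procedures do come apart once you update on an estimate: updating a combined (mixture) prior reweights the component distributions by how well each explains the data, whereas averaging separately computed posteriors keeps the original weights fixed. A rough sketch of the difference, assuming Python with SciPy, a normal measurement-error likelihood, and entirely made-up numbers:

```python
import numpy as np
from scipy import stats

# Grid demonstration that the two procedures give different answers.
# Every number here is an illustrative assumption.
x = np.linspace(0.01, 50, 5000)
p_pareto = stats.pareto(b=1.5).pdf(x)
p_lognorm = stats.lognorm(s=1.0).pdf(x)
w_pareto, w_lognorm = 0.1, 0.9
likelihood = stats.norm(loc=10, scale=3).pdf(x)  # a noisy estimate of x

def normalize(p):
    return p / np.trapz(p, x)

# (a) combine the priors first, then update: the data also reweights
# the two components relative to each other.
post_a = normalize((w_pareto * p_pareto + w_lognorm * p_lognorm) * likelihood)

# (b) update each prior separately, then average with the fixed weights.
post_b = (w_pareto * normalize(p_pareto * likelihood)
          + w_lognorm * normalize(p_lognorm * likelihood))

print(np.max(np.abs(post_a - post_b)) > 1e-6)  # True: they disagree
```

Approach (a) is the one that falls out of the hierarchical model discussed elsewhere in this thread.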

On your second point, yeah I'm including some extra information in the prior, which is kinda wishy-washy. I realize this is suboptimal, but it's better than anything else I've come up with, and probably better than not using a quantitative model at all. Do you know a better way to handle this?

In response to comment by MichaelDickens on On Priors
Comment author: DanielFilan 28 April 2016 11:29:05AM * 0 points

On your first point, instead of using a single prior distribution I could do a weighted combination of multiple distributions. There are two ways to do this: either have a prior be a combination distribution, or compute multiple posteriors with different distributions and take their weighted average. Not sure which one correctly handles this uncertainty.

Not sure what you mean by a 'combination distribution', but I think something like Carl's suggestion is correct: have a hierarchical model where the type of distribution over effectiveness that you will use is itself a random variable, which acts as a 'hyperparameter' of the distribution over effectiveness. You could also add a level to the hierarchy by having a distribution over the probabilities for each type of distribution. That being said, it might be convenient to fix these probabilities, since it's difficult to put all the evidence you have access to in the model. Probabilistic programming languages are a convenient way to handle such hierarchical models; if you're interested, I recommend checking out this tutorial for an introduction focusing on applications in psychology.
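As a sketch of what that extra level could look like in a probabilistic programming language (PyMC is assumed here as just one choice, and all parameters and data are made up; a fuller model would also put priors on the shape parameters themselves):

```python
import numpy as np
import pymc as pm

# Hypothetical sketch: the mixture weights over distribution shapes get
# their own prior (a Dirichlet), adding a level to the hierarchy. All
# parameter values and data are made up for illustration.
data = np.array([1.2, 1.8, 2.5, 4.0, 9.0])  # effectiveness measurements

with pm.Model():
    # Prior over the probability of each shape, instead of fixed weights.
    w = pm.Dirichlet("w", a=np.ones(2))
    # Mixture of a heavy-tailed and a thinner-tailed hypothesis.
    pm.Mixture(
        "effectiveness",
        w=w,
        comp_dists=[
            pm.Pareto.dist(alpha=1.5, m=1.0),
            pm.LogNormal.dist(mu=0.0, sigma=1.0),
        ],
        observed=data,
    )
    trace = pm.sample()  # the posterior over w says which shape fits better
```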

Comment author: Tom_Ash 28 January 2016 10:34:48PM 1 point

With 75% confidence I’d say that by February 10th at least 15 people will have expressed interest in predictions about effective altruism.

I hereby express interest. Others can do so in a comment under this!

Comment author: DanielFilan 07 February 2016 09:55:17AM 0 points

Am interested.

Comment author: Gleb_T 02 February 2016 03:58:31AM 8 points

I hear your concerns, and thank you for sharing them!

I think the issue of "making everything about altruism" is an important one to address. However, we seem to have different takes on how to go about this.

Let's take a bird's-eye view of our society. Currently, we have the consumer industry dominating our cultural space. The consumer industry creates a hedonistic treadmill around all aspects of our lives, including holidays. Valentine's Day is a classic example of a Hallmark Holiday, popularized by the consumer industry to inspire the population to buy stuff.

Now, I see our goal as trying to channel people's money into effective charity instead of consumerism. By comparison to the messages of consumerism out there, we're a tiny drop in the bucket. If we get even a bit more of our message out there, it would be a wonderful thing, I think. This article is an example of an effort to redirect a tiny proportion of that huge Valentine's Day spending into effective charities.

You postulate that for people who aren't EAs, this seems spammy. I would love it if that was the case! It would mean they were regularly exposed to such messages. From an effective giving marketing perspective, it would be a dream scenario. It's also incredibly unlikely to happen, given the current systemic incentives.

Now, you might mean that it feels spammy to you. Might it be that you're more exposed to such messages than most people? I know that a number of people have already indicated to me that they will pursue this course of action. How much money has already been redirected toward effective charities because of that?

Here's some further evidence. Judging by the fact that this post got 500 FB likes on its first day on The Life You Can Save blog, which is followed by EAs and non-EAs alike, people are not finding it spammy. Note: the baseline for posts on the TLYCS blog is about 100-200 likes over their entire lifetime, not just the first day.

Here's another piece of evidence. It was just accepted for publication by The Plain Dealer, the 16th largest newspaper in the US. They would be highly unlikely to accept anything their audience would find spammy.

Finally, regarding romance: this is something on which people will differ. If altruism doesn't float someone's romantic bubble, well cool - no pressure. For me, and potentially many others, it does. I think of romance as a feeling that I want to help the other person have a great life, be happy, and flourish, and a confidence that they want the same for me, with sex thrown in. That's perfectly compatible with altruism for me, but different people define romance differently :-P

Hope that helps relieve your concerns, and much appreciate you raising these issues!

Comment author: DanielFilan 02 February 2016 09:39:20AM * 5 points

You postulate that for people who aren't EAs, this seems spammy. I would love it if that was the case! It would mean they were regularly exposed to such messages.

I don't think that "spammy" just means "messages that the viewer often sees". I can't really put into words what I think it does mean, but if someone had a post like this about how the best Valentine's Day gift was to donate to a fund that provided good architecture in cities, I would consider that spammy (unless it was really well-written, interesting, and not written by an organisation dedicated to promoting good architecture).

It was just accepted for publication by The Plain Dealer, the 16th largest newspaper in the US. They would be highly unlikely to accept anything their audience would find spammy.

This is evidence, but my intuition is that it isn't very strong. I know that some of the largest newspapers in Australia print things which I would think of as low-quality and bordering on spammy. I also find it plausible that the 16th largest newspaper in the US might occasionally have trouble getting content, and would have to accept unusually low quality content.

That being said, I also think it's probable that different people have different criteria for what strikes them as spammy, and that there's a significant proportion of people to whom this isn't spammy.

Comment author: DanielFilan 25 January 2016 12:03:53PM * 6 points

I think that the point about veganism doesn't follow from the rest of your piece, and could well be true depending on the object-level details. In the case of weak convergence, no point will maximise both X and Y, but a set of points may well contain the maxima of both X and Y - in fact, if the set of points consists of the upper-right end of the 'ellipse', it could be that no other point does much better than anything in the set on X or Y. Therefore, even if the best intervention for X isn't the best intervention for Y, it can still be the case that the set of great interventions for X is almost the same as the set of great interventions for Y if X and Y are weakly correlated. Tying this back to veganism, it could well be that the set of vegan diets contains the best diet for animal suffering, the best diet for environmental impact, the best diet for tastiness, etc., and that vegan diets are better than all other diets on these metrics. Although on the object level it seems unlikely that minimising animal suffering is correlated with tastiness in diets, it seems plausible enough that it is correlated with environmental impact.
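A toy simulation of the ellipse picture, assuming Python with NumPy (the correlation strength and the top-1% cutoff are arbitrary illustrative choices, not estimates of anything):

```python
import numpy as np

# Toy version of the ellipse picture: X and Y weakly correlated, so no
# single point maximises both, but the top performers on each metric
# overlap far more than chance would predict.
rng = np.random.default_rng(0)
cov = [[1.0, 0.5], [0.5, 1.0]]  # weak-ish positive correlation
pts = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)
x, y = pts[:, 0], pts[:, 1]

print(np.argmax(x) == np.argmax(y))  # almost always False

# Overlap of the top 1% on X with the top 1% on Y:
top_x = set(np.argsort(x)[-100:])
top_y = set(np.argsort(y)[-100:])
print(len(top_x & top_y))  # typically far above the ~1 expected by chance
```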

Comment author: Gleb_T 24 January 2016 08:37:57PM 1 point

Another datapoint going against the theory is this post encouraging running fundraisers for weird charities.

Your own post about CS majors goes in the "weird" category for me and got plenty of upvotes.

Comment author: DanielFilan 24 January 2016 08:56:32PM 2 points

I wouldn't expect my "running fundraisers for weird charities" post to be seen as weird by the standards of most EAs that I know, make me look like a jerk, or make any readers feel bad about themselves.

Comment author: DanielFilan 15 January 2016 07:16:11PM * 0 points

One other metric that you could look at is the proportion of people contacted who donate. A difference here would be evidence of some kind of weirdness effect, but at the end of the day, $/person contacted is closer to what we care about. Still, I think it's interesting to look at.

My proportion is 10/42=24%, Giles' is 26/63=41%, and Peter's is >33/149=22%. I don't know that I would draw any conclusions from this, except that it makes the very similar $/person contacted numbers look much more coincidental.

[edit: rewording]

Comment author: Peter_Hurford 15 January 2016 01:21:54PM 1 point

Congrats on raising a successful fundraiser. I do suspect Ben Todd is right that people give based on the personal connection regardless of the actual charity, as long as it can be plausibly spun in a good way.

-

One thing I'd flag though is that your key piece of evidence is:

they both raised US$14.3/person contacted, while I raised US$14.0/person contacted

...but this number is sensitive to the number of people you contacted, because there are diminishing marginal returns to contacting more people.

In particular, a better comparison would be between the amount you raised and the median amount raised by other fundraisers who also contacted their friends individually. Unfortunately I don't have that number (yet).

-

Most of the money came from a few people who I would call semi-EAs – my impression was that they sort of knew about EA ideas, but that they weren't part of the community

This is another aspect that could skew things. Many fundraisers don't have access to a bunch of semi-EAs, and it's possible semi-EAs might go for REG or other weird charities more readily than other people would. It's possible that you would have gotten more non-EAs to donate if the charity were AMF.

-

if you think that donations to a weird charity are at least twice as valuable as donations to the best normal charity, and you want to run a birthday/Christmas fundraiser, it seems worth it to fundraise for the weird charity despite the possibility of eliciting lower donations per person contacted.

I think this is the best evidence -- if you do think your charity is much better, then it does offset a large decline in total money fundraised! And I agree that the drop-off is likely not that high.

Comment author: DanielFilan 15 January 2016 06:02:55PM * 0 points

they both raised US$14.3/person contacted, while I raised US$14.0/person contacted

...but this number is sensitive to the number of people you contacted, because there are diminishing marginal returns to contacting more people.

I think that this is a problem, but not necessarily as big a problem as you think it is. The two AMF fundraisers had very different numbers of people contacted (63 vs 149), and still had almost identical funding elicited per person contacted. My guess would be that the likelihood of someone donating is closely related to how well you know the person, and that that would be why additional people contacted would be less valuable. If this is right, and I just know fewer people than you do, then it could be that my marginal contactee was just as close to me as your marginal contactee is to you.

In particular, a better comparison would be between the amount you raised and the median amount raised by other fundraisers who also contacted their friends individually. Unfortunately I don't have that number (yet).

Do you mean comparing my amount raised to the median amount raised by other fundraisers who contacted about as many people as I did? The problem with that is that it wouldn't account for variation in how many people I'm close with. I'm not really sure how to get rid of this factor, except by giving many people the same instructions about what sort of person to contact, and seeing how well they do fundraising for REG and AMF.

Most of the money came from a few people who I would call semi-EAs – my impression was that they sort of knew about EA ideas, but that they weren't part of the community

This is another aspect that could skew things. Many fundraisers don't have access to a bunch of semi-EAs, and it's possible semi-EAs might go for REG or other weird charities more readily than other people would. It's possible that you would have gotten more non-EAs to donate if the charity were AMF.

This is also an important factor. Actually, since the EA outreach pipeline tends to start by talking about effective global poverty/health interventions, my guess is that semi-EAs might also be more enthusiastic about AMF than REG, and that I just know atypical semi-EAs. In general, details about the friend group seem like they will impact how well fundraisers go.

If you do think your charity is much better, then it does offset a large decline in total money fundraised! And I agree that the drop-off is likely not that high.

Yeah, this is really the most important argument in favour of weird charity fundraisers. I see a lot of room for arguments (like yours) for something on the order of a 10% dropoff in money raised, but I think that this is good evidence against a >50% dropoff (which I think was a priori plausible).

On running fundraisers for weird charities (14 points)

It's reasonably common in the EA community for people to run Christmas or birthday fundraisers for EA charities. Usually, these seem to be for charities like AMF – partially because Charity Science has great infrastructure for running Christmas/birthday fundraisers for GiveWell recommended charities, and partially because a large fraction of...
Comment author: Robert_Wiblin 23 December 2015 12:13:03AM 2 points

Whatever the problems with the total view, a straight average view is a complete non-starter.

I mean, the sadistic conclusion removes any intuitive appeal immediately.

Comment author: DanielFilan 23 December 2015 08:58:21AM * 1 point

the sadistic conclusion removes any intuitive appeal immediately.

Note that some clever people disagree with this (http://blog.practicalethics.ox.ac.uk/2014/02/embracing-the-sadistic-conclusion/):

This is not the post I was planning to write. Originally, it was going to be a heroic post where I showed my devotion to philosophical principles by reluctantly but fearlessly biting the bullet on the sadistic conclusion. Except… it turns out to be nothing like that, because the sadistic conclusion is practically void of content and embracing it is trivial. [emphasis mine]
