Comment author: Peter_Hurford 16 December 2017 07:40:07PM 0 points [-]

Is there a recap of what happened in last year's lottery?

Comment author: Carl_Shulman 16 December 2017 08:28:25PM 2 points [-]

See the post for that lottery: Tim Telleen-Lawton (former GiveWell employee) won. He is planning to post about his donations soon.

Do note, though, that under this lottery design the payout odds for any participant are unaffected by anyone else's participation. For you, it doesn't matter what other participants plan to do, except insofar as you would change your donation plans upon hearing that they had made a similar donation outside the lottery context. [I mention this because there is a common misconception to the contrary.]
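To make this concrete (a minimal sketch, using the $100,000 block size from the post; the donation figure is illustrative):

```python
def p_win(my_donation: float, block_size: float = 100_000) -> float:
    """In a backstopped lottery, your win probability depends only on
    your own ticket allocation, not on how many others enter."""
    return my_donation / block_size

# 1% chance whether the rest of the block is held by the guarantor
# or full of other entrants:
print(p_win(1_000))  # 0.01
```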

Comment author: ThomasSittler 16 December 2017 11:17:56AM 1 point [-]

This is really great; I like the idea of CEA building online infrastructure for the community. The explanation of the lottery is pretty good, and you've clearly put a lot of effort into it. It is still not perfectly clear in parts, though.

The draw date is the time at which the lottery will be drawn. The UNIX timestamp of this date will be used to select the NIST beacon entry from which the winning lottery number is derived.

An explanation of NIST beacon entry would be nice here.
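For other readers with the same question: the NIST Randomness Beacon publishes a new signed 512-bit random value every 60 seconds, indexed by timestamp, so the draw can commit in advance to a public random number that nobody controls. My guess at the derivation (a minimal sketch; the exact scheme isn't spelled out in the post, and the ticket numbering here is illustrative):

```python
def winning_number(beacon_output_hex: str, block_size: int = 100_000) -> int:
    """Reduce a NIST beacon output (512 bits, published as hex) to a
    winning ticket number in the range [1, block_size]."""
    # Modulo bias is negligible here: 2**512 dwarfs any realistic block size.
    return int(beacon_output_hex, 16) % block_size + 1

# e.g. with a (hypothetical) 512-bit beacon output value:
print(winning_number("7B" * 64))  # some number between 1 and 100,000
```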

The block size is guaranteed by a benefactor who agrees to ‘backstop’ the pot up to this amount.

I'm assuming from context that this means Paul will pay out the full $100,000 even if less than $100,000 is donated by individual entrants in the lottery. More explanation would be nice. [OK, this becomes clearer later in the text, when you say "Unallocated number ranges will be assigned as a ‘ticket’ to the lottery guarantor", but the flow is not smooth.]

It's also not entirely clear to me how block overflow is handled. Do you have a second guarantor waiting in the wings for the second block?

Comment author: Carl_Shulman 16 December 2017 05:40:38PM 1 point [-]

It's also not entirely clear to me how block overflow is handled. Do you have a second guarantor waiting in the wings for the second block?

If you have overflow, that means more than the amount of the first block is coming in, so you only need the guarantor for the last block. E.g. if people enter with $150,000 collectively, then $100,000 of that goes to pay out the first block (which will definitely pay out, because all of its numbers are allocated). There is then a 50% chance that the second block is won, in which case the guarantor adds $50,000 to pay it out (and otherwise the guarantor allocates the funds). Basically, the guarantor is just committing to take all the unclaimed tickets on the last block.
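In code, my reading of those mechanics (a sketch; the block size and dollar figures mirror the example above):

```python
def lottery_blocks(total_entered: float, block_size: float = 100_000):
    """Entrants fill blocks in order; only the final, partially filled
    block needs the guarantor's backstop."""
    full_blocks = int(total_entered // block_size)  # pay out with certainty
    remainder = total_entered % block_size          # entrants' share of last block
    guarantor_share = (block_size - remainder) % block_size
    p_entrants_win_last = remainder / block_size
    return full_blocks, guarantor_share, p_entrants_win_last

print(lottery_blocks(150_000))  # (1, 50000, 0.5), matching the example above
```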

Comment author: SamDeere 16 December 2017 07:46:43AM 0 points [-]

Yes, we can take donations in cryptocurrency (it's worth noting that donating appreciated assets can have tax advantages over converting them and donating in fiat). We're in the process of figuring out a solution that allows you to do this directly via the website, but for now, if you want to donate in crypto, please email lottery[at]effectivealtruism[dot]org and we can discuss.

Comment author: Carl_Shulman 16 December 2017 05:38:36PM 1 point [-]

Maybe mention that on the site? There are a lot of crypto donations happening now.

Comment author: itaibn 14 December 2017 06:19:31PM 2 points [-]

Indeed, maybe I should have made the point more harshly. To be clear, that comment is not about something people might do; it's about what's already present in the top post, which I see as breaking the Reddit rules.

I used soft language because I was worried about EA discussions breaking into arguments whenever someone suggests a good thing to do, and was worried that I might have erred too much in the other direction in other contexts. I still don't feel I have a good intuition on how confrontational I should be.

Comment author: Carl_Shulman 14 December 2017 06:55:11PM *  8 points [-]

I think it was an understandable first thought for someone who didn't know those rules, and Dony shouldn't be castigated for not knowing about them in a useful post about an important topic. But I think we should be definite about not violating the rules (e.g. by editing the post) now that everyone involved knows about them, while pursuing Dony's other good ideas.

Comment author: Carl_Shulman 14 December 2017 06:17:50PM *  5 points [-]

Buy and email them a copy of books like Doing Good Better (comment if you’ve done that so there’s no duplication of effort).

I like this one, and just sent Doing Good Better and Superintelligence with some explanations to their email.

Comment author: itaibn 14 December 2017 04:54:50PM 6 points [-]

I've spent some time thinking and investigating what the current state of affairs is, and here are my conclusions:

I've been reading through PineappleFund's comments. Many are responses to solicitations for specific charities, with PineappleFund endorsing them as possibilities. One of these was the SENS Foundation. Matthew_Barnett suggested that this is evidence that they particularly care about long-term-future causes, but given the diversity of other causes they endorsed, I think it is pretty weak evidence.

They haven't yet commented on any of the subthreads specifically discussing EA. However, these subthreads rank high under Reddit's sorting algorithm and have many comments endorsing EA. This is already a good position and is difficult to improve: they either like what they see or they don't. It may be better if the top-level comments explicitly described and linked to a specific charity, since that is what they responded well to in other comments, but I am cautious about making such surface-level generalizations, which might have more to do with the distribution of existing comments than with PineappleFund's tendencies.

Keep in mind that soliciting upvotes for a comment is explicitly against Reddit rules. I understand if you think that the stakes of this situation are more important than these rules, but be sure you are consciously aware of the judgment you have made.

Comment author: Carl_Shulman 14 December 2017 05:41:40PM *  12 points [-]

Keep in mind that soliciting upvotes for a comment is explicitly against Reddit rules. I understand if you think that the stakes of this situation are more important than these rules, but be sure you are consciously aware of the judgment you have made.

I'd say our policy should be 'just don't do that.' EA has learned its lesson on this from GiveWell.

Also:

Integrity:

Because we believe that trust, cooperation, and accurate information are essential to doing good, we strive to be honest and trustworthy. More broadly, we strive to follow those rules of good conduct that allow communities (and the people within them) to thrive. We also value the reputation of effective altruism, and recognize that our actions reflect on it.

Comment author: Carl_Shulman 14 December 2017 03:05:09AM *  15 points [-]

organizations working outside these areas, such as those working on existential risk and far future. My impression, however, is that OpenPhil has done a good job filling up the funding gaps in this area and that there are very few organizations that would meet the criteria I’m using for these recommendations.

[Disclaimers: speaking only for myself, although I do some work for Open Phil.]

I think that many EAs are overestimating the degree to which this funding changes the marginal returns of individual donations, for a few reasons:

  • In a number of these cases the Open Philanthropy Project grant write-ups discuss intentions to take up only a percentage of the grantee's budget, and a preference not to exceed half of it; the desire to avoid single-donor funding issues creates opportunities for small donors, as I discussed in this post
  • If a large donor limits itself to half of the grantee's budget, then not only is there 'room for more funding' left for other donors, but the limit also implicitly acts as a delayed counterfactual 1:1 matching grant: each small-donor dollar allows for another large-donor dollar (less the opportunity cost of Open Philanthropy's 'last dollar'; but insofar as one isn't just topping up Open Philanthropy's reserves, one presumably aims to do better than that). This could largely offset diminishing returns for the marginal donor
  • Where 'room for more funding' suggests a steep cliff of diminishing returns, in reality diminishing returns are normally much smoother, as additional funds enable reserves, marginal expenditures, and openness to and pursuit of additional expansion; see the linked articles by Max Dalton and Owen Cotton-Barratt
  • Concretely, I think small donors could 'top up' many of the AI grants in the Open Philanthropy grant database and get marginal cost-effectiveness within a factor of 2-4 of the average cost-effectiveness of the dollars in the relevant grant
  • In cases where the topping up would work better with larger amounts (e.g. $100,000 or $500,000) because of transaction costs (e.g. working with academic labs, or asking for advice on how to do it), small donors can make use of a donor lottery to convert their donation into a 1/n chance of a donation n times as great, for which the transaction costs are manageable (see the sketch below)
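A minimal sketch of the arithmetic behind that last bullet (my own illustration; the pot size is arbitrary): a donation of d buys a d/pot chance of directing the whole pot, so the expected amount directed is unchanged, while the costly grant investigation is only incurred in the winning branch.

```python
def donor_lottery(donation: float, pot: float = 100_000):
    """Expected dollars directed is preserved: p_win * pot == donation."""
    p_win = donation / pot
    expected_directed = p_win * pot
    return p_win, expected_directed

print(donor_lottery(5_000))  # (0.05, 5000.0): same expected donation,
# but the transaction costs are paid only 5% of the time
```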

In my view, the larger shift induced by Open Philanthropy is that the returns to using one's labor, knowledge, and other resources to create opportunities that it will find competitive have gone up (since such opportunities are more likely to be able to grow later if successful). That is a boost for several of the organizations you mention, but it can also apply to larger organizations whose activity tends to produce those opportunities through channels other than being a new organization (e.g. by building pipelines for new scientists or activists, doing research that better prioritizes options, or producing demonstrations that technical projects can make progress).

So I don't think that the arguments in the post are sufficient to establish this:

while I think some organizations may be more impactful per dollar overall, the marginal donation is not as useful as they are highly likely to have been able to fundraise it already with much less effort and there is less at risk (e.g., whether a program happens at all versus whether it is scaled up further).

I agree that CSH looks attractive for a donor who would otherwise give to AMF, that WASR and SI make sense for a donor who might otherwise give to The Humane League (as demonstrated by, e.g. Lewis' EA Funds grants), and that providing access to donation methods for Canadian donors could pay for itself for those donors (with some caveats about distributional details, and due diligence).

However, I don't think that increased Open Philanthropy funding provides adequate reason to dismiss the cause area of existential risk reduction for marginal funds (and in fact my own view is that the most attractive marginal opportunities lie in that area, directly or indirectly).

Comment author: Lila 04 November 2017 10:51:18PM 3 points [-]

The p-value critique doesn't apply to many scientific fields. As far as I can tell, it mostly applies to social science and maybe epidemiological research. In basic biological research, a paper wouldn't be published in a good journal on the basis of a single p-value. In fact, many papers don't have any p-values. When p-values are presented, they're often so low (10^-15) that they're unnecessary confirmations of a clearly visible effect. (Silly, in my opinion.) Most papers rely on many experiments, which ideally provide multiple lines of evidence. It's also common to propose a mechanism that's plausible given the existing literature. In some cases, you can see the fingerprints of skeptical reviewers. For example, when I see "to exclude the possibility that", I assume that this experiment was added later at the demand of a reviewer. Published biology is often wrong, but for subtler reasons.

Comment author: Carl_Shulman 05 November 2017 05:11:17AM *  13 points [-]

"The p-value critique doesn't apply to many scientific fields." I agree with this, or at least that it is vastly weaker when overwhelming data are available to pin down results.

"As far as I can tell, it mostly applies to social science and maybe epidemiological research. "

I disagree with this.

For instance, p-value issues have been catastrophic in quantitative genetics. The vast bulk of candidate-gene research in genetics was non-replicable p-hacking of radically underpowered studies: e.g. schizophrenia candidate genes replicate at chance levels in massive replication studies, despite literatures full of p-hacked and publication-bias artifact results. The field moved to requiring genome-wide significance of 5*10^-8 (i.e. a Bonferroni correction for multiple testing across all measured variants). Results obtained in huge genome-wide association studies that meet that criterion replicate reliably.
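To spell out where that threshold comes from (a sketch; the figure of roughly one million independent common variants is the standard approximation, not something stated above):

```python
alpha_familywise = 0.05
n_independent_tests = 1_000_000  # approximate count of independent common variants
bonferroni_threshold = alpha_familywise / n_independent_tests
print(bonferroni_threshold)  # 5e-08, the genome-wide significance level
```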

ETA: It isn't basic biological research, but medical and drug trials routinely have severe p-hacking issues. And there have been a lot of reproducibility problems reported with, e.g., preclinical cancer research, often lacking slam-dunk evidence. The Reproducibility Project: Cancer Biology is working on that.

Medical studies take up the bulk of biomedical research funds, and Eliezer's example is at the intersection of medicine and nutrition.

ETA2: I don't think issues of p-hacking would be solved just by using Bayesian statistics: people can instead selectively report Bayes factors, i.e. posterior hacking. It's the selective use of analytic and reporting degrees of freedom that's central. Here's Daryl Bem and coauthors' Bayesian meta-analysis purporting to show psi in Bem's p-hacked experiments.
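A toy simulation of that selective-reporting point (my own sketch, not from Bem's paper; it models k analysis choices as independent looks at null data, which somewhat overstates the inflation, since real analytic choices are correlated):

```python
import math
import random

def false_positive_rate(n_runs=10_000, k_analyses=10, n=30, z_crit=1.96):
    """Under a true null, run k analyses per 'study' and report whichever
    looks best; count how often nominal significance is claimed."""
    hits = 0
    for _ in range(n_runs):
        # Each analysis choice is modeled as a fresh z-test on null data.
        zs = [abs(sum(random.gauss(0, 1) for _ in range(n)) / math.sqrt(n))
              for _ in range(k_analyses)]
        if max(zs) > z_crit:  # report only the most favorable analysis
            hits += 1
    return hits / n_runs

print(false_positive_rate())  # ~0.40 (= 1 - 0.95**10), versus the nominal 0.05
```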

Comment author: Pablo_Stafforini 31 October 2017 08:59:50PM *  1 point [-]

In that comment I was saying that it seemed to me he was overshooting more than undershooting with the base rate for dysfunctionality in institutions/fields, and that he should update accordingly and check more carefully for the good reasons that institutional practice or popular academic views often (but far from always) indicate. That doesn't mean one can't look closely and form much better estimates of the likelihood of good invisible reasons, or that the base rate of dysfunction is anywhere near zero.

I offered that quote to cast doubt on Rob's assertion that Eliezer has "a really strong epistemic track record, and that this is good evidence that modesty is a bad idea." I didn't mean to deny that Eliezer had some successes, or that one shouldn't "look closely and form much better estimates of the likelihood of good invisible reasons" or that "the base rate of dysfunction is anywhere near zero", and I didn't offer the quote to dispute those claims.

Readers can read the original comment and judge for themselves whether the quote was in fact pulled out of context.

Comment author: Carl_Shulman 31 October 2017 09:18:57PM 1 point [-]

Please take my comment as explaining my own views, lest they be misunderstood, not as condemning your citation of me.

Comment author: Pablo_Stafforini 31 October 2017 06:41:17PM *  2 points [-]

I would say that Eliezer and his social circle have a really strong epistemic track record, and that this is good evidence that modesty is a bad idea; but I gather you want to use that track record as Exhibit A in the case for modesty being a good idea.

Really? My sense is that the opposite is the case. Eliezer himself acknowledges that he has an "amazing bet-losing capability" and my sense is that he tends to bet against scientific consensus (while Caplan, who almost always takes the consensus view, has won virtually all his bets). Carl Shulman notes that Eliezer's approach "has lead [him] astray repeatedly, but I haven't seen as many successes."

Comment author: Carl_Shulman 31 October 2017 07:55:26PM *  7 points [-]

and Carl Shulman notes that his approach "has lead [him] astray repeatedly, but I haven't seen as many successes."

That quote may not convey my view, so I'll add to this. I think Eliezer has had a number of striking successes, but in that comment I was saying that it seemed to me he was overshooting more than undershooting with the base rate for dysfunctionality in institutions/fields, and that he should update accordingly and check more carefully for the good reasons that institutional practice or popular academic views often (but far from always) indicate. That doesn't mean one can't look closely and form much better estimates of the likelihood of good invisible reasons, or that the base rate of dysfunction is anywhere near zero. E.g. I think he has discharged the burden of due diligence wrt MWI.

If many physicists say X, and many others say Y and Z which seem in conflict with X, then at a high rate there will be some good arguments for X, Y, and Z. If you first see good arguments for X, you should check to see what physicists who buy Y and Z are saying, and whether they (and physicists who buy X) say they have knowledge that you don't understand.

In the case of MWI, the physicists say they don't have key obscure missing arguments (they are public and not esoteric), and that you can sort interpretations into ones that accept the unobserved parts of the wave function in QM as real (MWI, etc), ones that add new physics to pick out part of the wavefunction to be our world, and ones like shut-up-and-calculate that amount to 'don't talk about whether parts of the wave function we don't see are real.'

Physicists working on quantum foundations are mostly mutually aware of one another's arguments, and you can read or listen to them for their explanations of why they respond differently to that evidence, and look to the general success of those habits of mind. E.g. the past success of scientific realism and Copernican moves: distant lands on Earth that were previously unseen by particular communities turned out to be real, other Sun-like stars and planets were found, biological evolution, etc. Finding out that many of the interpretations amount to MWI under another name, or simply refuse to answer the question of whether MWI is true, reduces the level of disagreement to be explained, as does the finding that realist/multiverse interpretations have tended to gain ground with time and to do better among those who engage with quantum foundations and cosmology.

In terms of modesty, I would say that generally 'trying to answer the question about external reality' is a good epistemic marker for questions about external reality, as is Copernicanism/not giving humans a special place in physics or drastically penalizing theories on which the world is big/human nature looks different (consistently with past evidence). Regarding new physics for objective collapse, I would also note the failure to show it experimentally and the general opposition to it. That seems sufficient to favor the realist side of the debate among physicists.

In contrast, I hadn't seen anything like such due diligence regarding nutrition, or precedent in common law.

Regarding the OP thesis, you could summarize my stance as follows: assigning 'epistemic peer' or 'epistemic superior/inferior' status in the context of some question of fact requires a lot of information and understanding when we are not assumed to already have reliable fine-grained knowledge of epistemic status. That often involves descending to the object level: e.g. if the class of 'scientific realist arguments' has a good track record, then you will need to learn enough about a given question and the debate on it to know whether that systemic factor is actually at play in the debate before you can know whether to apply that track record in assessing epistemic status.
