Comment author: John_Maxwell_IV 20 June 2018 07:59:10AM *  1 point [-]

I don't think Bitcoin became popular on the strength of its wikis. I also do think that free software projects cannibalizing one another can be harmful, e.g. there was a period where Python had a bunch of serious web frameworks whereas Ruby just had one or two, and I think that was good for the Ruby side because the ecosystem that built up around those one or two was deeper.

Gwern wrote this essay about why non-Wikipedia wikis have a hard time competing with Wikipedia. He recommends using Wikipedia when possible and only falling back on specialized wikis for things Wikipedia won't allow. So that might be a path forwards.

Comment author: gwern 22 June 2018 02:33:10AM *  5 points [-]

Bitcoin definitely didn't become popular because of its wiki. Early on I wanted to contribute to the wiki (I think as part of my DNM work) and I went to register and... you had to pay bitcoins to register. -_- I never did register or edit it, IIRC. And certainly people didn't use it much, aside from some early use of the FAQ.

An EA wiki would be sensible. In this case, while EAers probably spend too little time adding standard factual material to Wikipedia, material like 'cause prioritization' would be a poor fit for Wikipedia articles, because it necessarily involves lots of Original Research, a specific EA POV, coverage of non-Notable topics and interventions (because if they were already Notable, then they might not be a good use of resources for EA!), etc.

My preference for special-purpose wikis is a two-tier structure: all the standard factual material goes into Wikipedia, benefiting from the fully-built-out set of encyclopedia articles & editing community & tools & traffic, and then the more controversial, idiosyncratic material builds on that foundation in a special-purpose wiki. But I admit I have no proof that this strategy works in general or would be suitable for a cause-prioritization wiki. (At least one problem is that people won't read the relevant WP article while reading the special-purpose wiki, because of the context switch.)

Comment author: BenHoffman 29 March 2018 03:34:34AM 1 point [-]

> If they did the followups and malaria rates held stable or increased, you would not then believe that the bednets do not work; if it takes randomized trials to justify spending on bednets, it cannot then take only surveys to justify not spending on bed nets, as the causal question is identical.

It's hard for me to believe that the effect of bednets is large enough to show an effect in RCTs, but not large enough to show up more often than not as a result of mass distribution of bednets. If absence of this evidence really isn't strong evidence of no effect, it should be possible to show it with specific numbers and not just handwaving about noise. And I'd expect that to be mentioned in the top-level summary on bed net interventions, not buried in a supplemental page.

Comment author: gwern 22 June 2018 01:44:03AM *  2 points [-]

> It's hard for me to believe that the effect of bednets is large enough to show an effect in RCTs, but not large enough to show up more often than not as a result of mass distribution of bednets.

You may find it hard to believe, but nevertheless, that is the fact: correlational results can easily be several times the true causal effect, in either direction. If you really want numbers, see the papers & meta-analyses comparing correlations with the causal estimates from simultaneous or later-conducted randomized experiments, which I've compiled at https://www.gwern.net/Correlation ; they have plenty of numbers. Hence, it is easy for a causal effect to be swamped by time trends or other correlates, and a followup correlation cannot and should not override credible causal results. This is why we need RCTs in the first place. Followups can do useful things like measure whether the implementation is being delivered, or provide correlational data on things not covered by the original randomized experiments (like unconsidered side effects), but they cannot retry the original case with double jeopardy.
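To make the "swamped by time trends" point concrete, here is a minimal simulation sketch in Python, with invented illustrative numbers (not AMF data): the nets genuinely cut incidence by 20%, yet an uncontrolled before/after survey makes them look harmful, purely because of an unrelated upward drift in the baseline rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented illustrative numbers, not AMF data: bednets truly cut malaria
# incidence by 20%, but baseline incidence also drifts upward ~10%/year
# for unrelated reasons (weather, reporting, migration).
true_effect = 0.80   # multiplicative causal effect of nets on incidence
trend = 1.10         # yearly multiplicative drift in baseline incidence
baseline = 100.0     # cases per 1,000 person-years in year 0

# RCT in year 0: treated vs. control villages surveyed at the same time.
control = rng.poisson(baseline, 50)
treated = rng.poisson(baseline * true_effect, 50)
print("RCT estimate:         ", treated.mean() / control.mean())  # ~0.80: recovers the truth

# Correlational followup: every village gets nets in year 0, and incidence
# is re-surveyed in year 3 with no contemporaneous control group.
year3 = rng.poisson(baseline * true_effect * trend**3, 50)
print("Before/after estimate:", year3.mean() / baseline)  # ~1.06: nets look harmful
```

The RCT recovers the true 0.8 ratio because treated and control villages are measured at the same time; the before/after comparison confounds the net effect with three years of drift (0.8 × 1.1³ ≈ 1.06), so such surveys could neither confirm nor refute the causal result.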

Comment author: gwern 21 July 2017 09:24:48PM *  3 points [-]

> The data was noisy, so they simply stopped checking whether AMF’s bed net distributions do anything about malaria.

This is an unfair gotcha. What would the point of this be? Of course the data is noisy. Not only is it noisy, it is irrelevant - if it were not, there would never have been any need to run randomized trials in the first place; you would simply dump the bed nets where convenient and check malaria rates. The whole point of randomized trials is realizing that correlational data is extremely weak and cannot give reliable causal inferences. (I can certainly imagine reasons why malaria rates might go up in regions that AMF does bed net distribution in, just as I can imagine reasons why death rates might be greater or increase over time in patients prescribed new drug X as compared to patients not prescribed X...)

If they did the followups and malaria rates held stable or increased, you would not then believe that the bednets do not work; if it takes randomized trials to justify spending on bednets, it cannot then take only surveys to justify not spending on bednets, as the causal question is identical. Since it does not affect any decisions, it is not important to measure. Or, if it did, then what you ought to be criticizing GiveWell & AMF (and everyone else) for is ever advocating & spending resources on highly unethical randomized trials, not for failing to do some followup surveys.

(A reasonable critique might be that they are not examining whether the intervention - which has been identified as causally effective and passing a cost-benefit - is being correctly delivered, the right people getting the nets, and using the nets. But as far as I know, they do track that...)

Comment author: Pablo_Stafforini 08 January 2015 07:50:48PM *  7 points [-]

Lila mentions cancer, which I think is very instructive in this context. A "war on cancer" was declared about 45 years ago. Since then, 100-300 billion dollars have been spent on cancer R&D. Very little progress has been made. Why should we expect a SENS-inspired "war on aging" to make lots of progress, on all seven causes of aging (of which cancer is just one), in one third of that time, with one hundredth of that budget?

EDIT: The paragraph above overstretches the analogy between cancer and aging. See Gwern's comment below.

Comment author: gwern 17 May 2017 02:35:40AM 4 points [-]

Saying 'very little progress' considerably understates it; many cancers that were once untreatable are now treatable, and even former death sentences can be cured. As well, much of that money was spent in the past on expensive but now-obsolete methods, or on building knowledge bases and tools which are now available for anti-aging research. (While Apollo may have cost $26b to put a man on the moon in 1969, it should not then cost another $26b in 2017 to put another man on the moon.)

Comparing with cancer is interesting in part because cancer and aging are so different. Cancer is a hostile self-reproducing ecosystem which literally evolves as it is treated; aging and senescent cells appear to be none of those things. For example, it appears to be a lot easier to trick a senescent cell into committing suicide than a cancer cell.

> Why should we expect a SENS-inspired "war on aging" to make lots of progress, on all seven causes of aging

Do you really need progress on all seven? Mortality with age follows a Gompertz distribution, which has an exponentially increasing term in mortality risk plus a baseline hazard; interventions on the aging process itself, as opposed to tinkering with improved fixes for symptoms like cancer, would presumably affect the exponential term rather than the baseline. Since the Gompertz mortality curve is dominated by the exponential term, not the baseline hazard, even small reductions in the aging rate lead to large changes in life expectancy. (In contrast, large reductions in the baseline hazard, like halving it, add only a few years.)
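A quick numerical sketch of that claim, in Python, using made-up Gompertz parameters a and b rather than fitted demographic ones: the hazard is h(t) = a·exp(b·t), so the survivor function is S(t) = exp(-(a/b)·(exp(b·t) - 1)), and life expectancy is the integral of S.

```python
import numpy as np

def life_expectancy(a, b, t_max=150.0, dt=0.01):
    """Life expectancy at birth under a Gompertz hazard h(t) = a * exp(b*t),
    computed by numerically summing the closed-form survivor function
    S(t) = exp(-(a/b) * (exp(b*t) - 1))."""
    t = np.arange(0.0, t_max, dt)
    s = np.exp(-(a / b) * (np.exp(b * t) - 1.0))
    return s.sum() * dt  # E[T] = integral of S(t) dt

# Made-up illustrative parameters (not fitted to real life tables):
a = 5e-5  # baseline hazard per year
b = 0.09  # aging rate: mortality doubles every ln(2)/b ~ 7.7 years

print(f"baseline:               {life_expectancy(a, b):.1f} years")
print(f"halve baseline hazard:  {life_expectancy(a / 2, b):.1f} years")   # ~ +7.7 years
print(f"slow aging rate by 10%: {life_expectancy(a, 0.9 * b):.1f} years")  # comparable gain
print(f"slow aging rate by 20%: {life_expectancy(a, 0.8 * b):.1f} years")  # ~ double that gain
```

With these parameters, halving the baseline hazard a buys about ln(2)/b ≈ 7.7 years, while merely slowing the aging rate b by 10% buys a comparable amount, and slowing it by 20% buys roughly twice that - the exponential term dominates.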