Comment author: RomeoStevens 01 July 2018 04:35:55PM 2 points

Good stuff! You might be interested in both OODA loops and Marr's levels of analysis.

Comment author: Peter_Hurford 19 June 2018 02:46:10AM 3 points

it might also be nice if there was a repository of info on why some common cause areas are not generally recommended by EA

Good idea. I had been experimenting with adding summaries at the top of some articles (for example, this one on aging) and trying to figure out how opinionated the Wiki should be. Right now I'm trying to err on the side of being less opinionated. If you have any thoughts on this issue, I'd definitely be curious to hear them.

I'm unsure how one would incentivize such info being added though.

We're hoping to slowly build a volunteer pool to do this kind of work. In my past experience, this seems like the kind of task volunteers have done well on. Furthermore, given funding, we'd even be able to pay for the assistance.

Comment author: RomeoStevens 19 June 2018 04:02:08PM 2 points

Maybe also a prize for best new wiki entry periodically?

How opinionated it is probably comes down to tone more than content. Less 'and this is why everyone who supports education is stupid' and more 'this is the story on education studies so far; we hope it helps someone trying to develop new educational interventions avoid the blind alleys that others have already gone down.' That could help. It could also harm, in the sense that controversy engenders engagement, and a more confrontational approach would get people to actually argue.

Also, I'd like to note that I'm bullish on this idea overall, as I think it might allow for genuine philosophical progress. Part of the lack of progress comes from the fragmentary nature of all the various arguments, which makes people very hesitant to offer critiques, since a likely outcome is 'that has already been addressed in 3 places.' We tend towards a community of correctors, which shuts down generative creative thought.

Comment author: RomeoStevens 19 June 2018 01:55:40AM 4 points

In addition to cataloging sources of data and analysis for current and potential EA causes, it might also be nice if there was a repository of info on why some common cause areas are not generally recommended by EA. I'm unsure how one would incentivize such info being added though.

Comment author: RomeoStevens 05 June 2018 11:08:20PM 8 points

The biggest risk seems to be in the hotel manager position. My guess is that the learning curve, and the ongoing maintenance costs and time needed to run a 17-person hotel, are being underestimated.

Comment author: RomeoStevens 28 May 2018 05:50:34PM 1 point

Another way to frame it is to think about Marr's three levels of analysis: the computational (what are we even trying to do?), the algorithmic (what algorithms/heuristics should we run, given that we want to accomplish that?), and the implementational (what, concretely, should our next actions be to implement those algorithms in reality?). Cleanly separating which level you are working on prevents confusion.

Comment author: Eva 22 May 2018 12:28:04AM 3 points

And when groups do work on these issues there is a tendency towards infighting.

Some things that could help:

  • Workshops that bring people together. It's harder to misinterpret someone's work when they are describing it in front of you, and it's easier to make fast progress towards a common goal (and to increase the salience of the goal).
  • Explicitly recognizing that the community is small and needs nurturing. It's natural for people to be scared at first that someone else is working in their coveted area (career concerns), but overall I think it might be a good thing even on a personal level. The topic is so neglected that if people work together and help bring attention to it, real progress could be made. In contrast, sometimes you see a subfield where people are so busy tearing down each other's work that nothing can get published or funded - a much worse equilibrium.

Bringing people together is hugely important to working constructively.

Comment author: RomeoStevens 22 May 2018 09:35:41PM 0 points

when groups do work on these issues there is a tendency towards infighting.

Do you think this is a side effect of the-one-true-ontology issues?

Do you happen to know which conferences research results in this area tend to get presented at, or which journals they tend to get published in? It could be useful to bootstrap from those networks. I've been tracing citation chains from highly cited stats papers, but the signal-to-noise ratio is very low: mostly esoteric statistical methods rather than meta-research.

Comment author: RomeoStevens 21 May 2018 06:44:20PM 2 points

Really happy to see this get some attention. I think this is where the biggest potential value-add of EA lies. Very, very few groups are prepared to do work on methodological issues, and those that do seem to get bogged down in object-level implementation details quickly (see, for example, the output of METRICS). Method work is hard, and connecting people and resources to advance it is neglected.

Comment author: Michael_S 21 May 2018 04:43:24PM 3 points

I agree that the limitations of RCTs are a reason to devalue them relative to other methodologies. They still add value over our priors, but I think the best use cases for RCTs are when they're cheap and can be done at scale (e.g., in the context of online surveys), or when you're randomizing an expensive intervention that would be provided anyway, such that the relative cost of the RCT is small.

When the costs of RCTs are large, I think there's reason to favor other methodologies, such as regression discontinuity designs, which have fared quite well compared to RCTs (https://onlinelibrary.wiley.com/doi/abs/10.1002/pam.22051).
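
For intuition, here's a minimal sketch of the kind of sharp regression discontinuity design (RDD) being referred to, on simulated data with made-up numbers: units whose running variable crosses a cutoff receive the intervention, and the effect is read off as the jump in the outcome at the cutoff.

```python
import numpy as np
import statsmodels.api as sm

# Sharp RDD sketch on simulated data (hypothetical numbers throughout).
# Units with running variable x >= 0 are treated; the causal effect is
# estimated as the discontinuity in y at the cutoff, using a local
# linear regression within a bandwidth around it.
rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(-1, 1, n)                  # running variable, cutoff at 0
treated = (x >= 0).astype(float)
y = 1.0 + 0.5 * x + 0.3 * treated + rng.normal(0, 0.2, n)  # true effect: 0.3

bandwidth = 0.25                           # keep only observations near the cutoff
near = np.abs(x) < bandwidth
design = np.column_stack([treated[near], x[near], treated[near] * x[near]])
fit = sm.OLS(y[near], sm.add_constant(design)).fit()
print(f"estimated jump at cutoff: {fit.params[1]:.3f}")  # should recover ~0.3
```

The bandwidth choice drives the usual bias/variance trade-off: a narrower window gives a cleaner comparison at the cutoff but fewer observations to estimate it with.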

Comment author: RomeoStevens 21 May 2018 06:42:06PM 0 points

Would the development of a VoI (value of information) checklist be helpful here? Heuristics and decision criteria, similar to the Campbell Collaboration's flowchart for experimental design.

Comment author: Anders_Huitfeldt 21 May 2018 05:56:44PM 2 points

I also conduct research on the generalizability issue, but from a different perspective. In my view, any attempt to measure effect heterogeneity (and, by extension, research generalizability) is scale-dependent: it is very difficult to tease apart genuine effect heterogeneity from the appearance of heterogeneity due to measuring the effects on an inappropriate scale.
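
A toy numeric example (my own illustration, not from the paper) makes the scale-dependence point concrete: two strata can share exactly the same risk ratio while their risk differences and odds ratios diverge, so whether the effect "is heterogeneous" depends on which scale you measure it on.

```python
# Made-up numbers: the same pair of strata looks homogeneous on one
# effect scale and heterogeneous on another.
def effect_measures(p_treated, p_control):
    rd = p_treated - p_control                    # risk difference
    rr = p_treated / p_control                    # risk ratio
    odds = lambda p: p / (1 - p)
    or_ = odds(p_treated) / odds(p_control)       # odds ratio
    return rd, rr, or_

# Stratum A has baseline risk 10%, stratum B has baseline risk 40%;
# in both, treatment doubles the risk (risk ratio exactly 2) ...
for name, (pt, pc) in {"A": (0.2, 0.1), "B": (0.8, 0.4)}.items():
    rd, rr, or_ = effect_measures(pt, pc)
    print(f"stratum {name}: RD={rd:.2f}  RR={rr:.2f}  OR={or_:.2f}")
# ... yet the risk differences (0.10 vs 0.40) and odds ratios
# (2.25 vs 6.00) differ, so the strata look "heterogeneous" on those scales.
```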

To get around this, I have constructed a new scale for measuring effects, which I believe is more natural than the alternative measures. My work on this is available on arXiv at https://arxiv.org/abs/1610.00069 . The paper has been accepted for publication in the journal Epidemiologic Methods, and I plan to post a full explanation of the idea here and on Less Wrong when it is published (presumably a couple of weeks from now).

I would very much appreciate feedback on this work, and as always, I operate according to Crocker's Rules.

Comment author: RomeoStevens 21 May 2018 06:39:54PM 2 points

I think 'counterfactual outcome state transition parameters' is a bad name, in that it doesn't help people identify where and why they should use the concept, nor does it communicate all that well what it really is. I'd want to run each of the key terms through a thesaurus in search of something punchier. You might object that 'marketing' an esoteric statistics concept seems perverse, but papers with memorable titles do in fact outperform, according to the data, AFAIK. Sucks, but what can you do?

I bother to go into this because this research area seems important enough to warrant attention, and I worry it won't get it.

Comment author: Evan_Gaensbauer 23 April 2018 11:00:40PM 1 point

This doesn't add much to the conversation. Obviously, people getting over-excited by EA and the personal and philosophical opportunities it provides to make an impact will lead lots of them to be overconfident in their long-term commitment, and they'll turn out not to be as altruistic as they think. The OP is already concerned about a default state of people becoming less altruistic over time, and focuses on how we can keep ourselves more altruistic than we'd otherwise tend to be, long-term, through things like commitment mechanisms. So theories of psychology that don't specify the mechanisms by which commitment devices fail aren't precise enough to answer, to our satisfaction, the question of what to do about value drift.

Comment author: RomeoStevens 04 May 2018 10:14:54PM 1 point

I wasn't commenting on the overall intention, but on the enumerations of causal levers outlined by economists in the talks given. I was objecting to the frame that these causal levers are obfuscated; I think presenting them as such is a way around the fact that they're low-status to talk about directly.
