Comment author: tomstocker 17 May 2017 10:37:12PM 0 points

I like the way you introduced the calculus; it was artful. I think going one step further would be useful, i.e. looking at the income distributions of recipients of different interventions and charities.

Comment author: tomstocker 11 February 2017 01:59:26AM 2 points

I'd be interested in the long-run future, and in things focused more directly on human wellbeing than generic health and income. I'd also be more interested if these groups not only updated on orgs we all know about but also did / collated exploratory work on speculative opportunities.

Comment author: tomstocker 11 February 2017 01:56:01AM 1 point

What will it cost?

Comment author: RobBensinger 09 February 2017 11:21:44PM 3 points

Three points worth mentioning in response:

  1. Most of the people best-known for worrying about AI risk aren't primarily computer scientists. (Personally, I've been surprised by the number of physicists.)

  2. 'It's self-serving to think that earning to give is useful' seems like a separate thing from 'it's self-serving to think AI is important.' Programming jobs obviously pay well, so no one objects to people following the logic from 'earning to give is useful' to 'earning to give via programming work is useful'; the question there is just whether earning to give itself is useful, which is a topic that seems less related to AI. (More generally, 'technology X is a big deal' will frequently imply both 'technology X poses important risks' and 'knowing how to work with technology X is profitable', so it isn't surprising to find those beliefs going together.)

  3. If you were working in AI and wanted to rationalize 'my current work is the best way to improve the world', then AI risk is really the worst way imaginable to rationalize that conclusion: accelerating general AI capabilities is very unlikely to be a high-EV way to respond to AI risk as things stand today, and the kinds of technical work involved in AI safety research often require skills and background that are unusual even within CS/AI. (Ryan Carey wrote in the past: "The problem here is that AI risk reducers can't win. If they're not computer scientists, they're decried as uninformed non-experts, and if they do come from computer scientists, they're promoting and serving themselves." But the bigger problem is that the latter doesn't make sense as a self-serving motive.)

Comment author: tomstocker 11 February 2017 01:33:25AM 0 points

Except that on point 3, the policies advocated and the strategies being tried aren't what you'd expect if people were trying to reduce x-risk; they're what you'd expect if people were trying to enable AI to work rather than backfire.

Comment author: RomeoStevens 08 February 2017 09:55:32PM 7 points

a general unwillingness to explore new topics.

This feels really obvious from where I'm sitting, but it's met with incredulity by most EAs I speak with. Applause lights for new ideas, paired with a total lack of engagement when anyone actually talks about new ideas, seems more dangerous than I think we're giving it credit for.

Comment author: tomstocker 11 February 2017 12:54:28AM 5 points

See the recent pain control brief by Lee Sharkey as an example, or Auren Forrester's work on suicide.

Comment author: RobBensinger 07 February 2017 11:04:24PM 8 points

Anonymous #39:

Level of involvement: I'm not an EA, but I'm EA-adjacent and EA-sympathetic.

EA seems to have picked all the low-hanging fruit and doesn't know what to do with itself now. Standard health and global poverty feel like trying to fill a bottomless pit. It's hard to get excited about GiveWell Report #3543 about how we should be focusing on a slightly different parasite and that the cost of saving a life has gone up by $3. Animal altruism is in a similar situation, and is also morally controversial and tainted by culture war. The benefits of more long-shot interventions are hard to predict, and some of them could also have negative consequences. AI risk is a target for mockery by outsiders, and while the theoretical arguments for its importance seem sound, it's hard to tell whether an organization is effective in doing anything about it. And the space of interventions in politics is here-be-dragons.

The lack of salient progress is a cause of some background frustration. Some of those who think their cause is best try to persuade others in the movement, but to little effect, because there's not much new to say to change people's minds; and that contributes to the feeling of stagnation. This is not to say that debate and criticism are bad; being open to them is much better than the alternative, and the community is good at being civil and not getting too heated. But the motivation for them seems to draw more from ingrained habits and compulsive behavior than from trying to expose others to new ideas. (Because there aren't any.)

Others respond to the frustration by trying to grow the movement, but that runs into the real (and in my opinion near-certain) dangers of mindkilling politics, stifling PR, dishonesty (Sarah Constantin's concerns), and value drift.

And others (there's overlap between these groups) treat EA as a social group, whether that means house parties or memes. Which is harmless fun in itself, but hardly an inspiring direction for the movement.

What would improve the movement most is a wellspring of new ideas of the quality that inspired it to begin with. Apart from that, it seems quite possible that there's not much room for improvement; most tradeoffs seem to not be worth the cost. That means that it's stuck as it is, at best -- which is discouraging, but if that's the reality, EAs should accept it.

Comment author: tomstocker 11 February 2017 12:51:55AM 1 point

What's wrong with low-hanging fruit? Not entertaining enough?

Comment author: Lee_Sharkey 07 February 2017 09:27:23AM 2 points

Hi Tom,

Great to hear that it's been suggested. By the looks of it, it may be an area better suited to an Open Philanthropy Project-style approach, since it's primarily a question of policy, with a sparser evidence base and difficulties in defining impact. I styled my analysis on OPP's approach (with some obvious shortcomings on my part).

I could have done better in the analysis to distinguish between the various types of pain. As you say, they are not trivial distinctions, especially when it comes to treatment with opioids.

I'd be interested to hear your take on the impact of pain control on the nature of medicine and the doctor-patient dynamic. What trends are you concerned about hastening exactly?

Comment author: tomstocker 07 February 2017 08:14:01PM 1 point

The shift from patient as recipient of medicine from a clinician with authority (old-style developed world, and much of e.g. Africa) to patient as consumer. There are good and bad things about this transition. Pain, pain control, and patient perceptions are just under-studied as a nexus. Not a reason not to go ahead, just my biggest worry with this stuff. (I personally don't think risk of death / side effects is much of a worry at all when we're talking about opioid availability in inpatient settings.)

Comment author: tomstocker 06 February 2017 09:54:47PM 2 points

Ben, I'm impressed - thank you for sharing and wish you continued success with the business despite the changing political environment.

Comment author: Lee_Sharkey 02 February 2017 10:40:58PM 6 points

Hi Elizabeth,

I focus on opioid medications for the same reasons that I don't focus on cannabinoids:

  • There isn't strong expert consensus on the effectiveness of cannabinoids. This may change as the search for alternative drugs, particularly for chronic pain, intensifies. While there are some areas that will likely see their use increase (you justly highlight neuropathic pain), my understanding is that current evidence doesn't reliably indicate their effectiveness for severe pain. All this said, there are good reasons to believe they are understudied, both as single interventions and as adjuvants. I should perhaps have elaborated on this and similar research avenues in the article. Thank you for bringing attention to this issue.

  • Opioid medications, although controlled and functionally inaccessible, are legal medicines in all countries. With few well-evidenced cannabinoid medications approved for use, and only in a handful of countries, it's unlikely that fighting to approve members of a controversial drug class of questionable efficacy for many medical indications is the best way to bring pain relief to patients in developing countries. (It could be incredibly effective if generating widespread acceptance of cannabinoid medications ended up, through a long causal chain, driving more rational controlled-substances policies. But this is far from a neglected and tractable cause.)

For the above two reasons, the movement to increase access to opioid medications has historical precedent on its side and solid expert consensus on their efficacy (even if their dangers are debated). It seems that they comprise an essential component of the best solution (however imperfect) to the gross deficiency of analgesia in the majority of contexts globally. But you're correct to highlight what may be the least explored part of the analysis.

Comment author: tomstocker 06 February 2017 09:36:52PM 3 points

I'm really happy to see this article - I mentioned it to GiveWell a while ago, but they weren't interested. For me this hits what I see as the moral priority more than a lot of the other projects and options on the go.

Simple, complex, and neuropathic pains respond differently to different analgesics. Opioids are very effective for simple pain over the short term, e.g. surgeries, broken bones, etc. Neuropathic and complex pain don't have good equivalents for pain relief, and patients are stuck with cannabinoids, anti-epileptics, and anti-depressants (or ketamine, ironically, if it weren't so restricted in the developed world for its noted impact on organ function).

Not a reason not to back access to opioids in the developing world.

The least well-explored part, IMO, is the impact of pain control on the nature of medicine, the doctor-patient interaction, etc., because the West may have fallen into a trap that it would be a shame to hasten in the developing world.

Comment author: tomstocker 24 November 2016 02:16:36PM 0 points

Unintended consequences are harder for campaigns to avoid than even for governments, from where I'm sitting. But yes, worth looking at more, and yes, I'm interested. Nice post.
