Comment author: MetricSulfateFive 31 December 2017 03:19:16AM *  14 points [-]

The closest is probably Wild-Animal Suffering Research, since they have published (on their website) a few papers on invertebrate welfare (e.g., Which Invertebrate Species Feel Pain?, An Analysis of Lethal Methods of Wild Animal Population Control: Invertebrates). However, their work doesn't focus exclusively on invertebrates, as they have published some articles that either apply to all animals (e.g., “Fit and Happy”: How Do We Measure Wild-Animal Suffering?), or only apply to vertebrates (e.g., An Analysis of Lethal Methods of Wild Animal Population Control: Vertebrates).

Animal Ethics and Utility Farm also work on issues relating to wild animal suffering. My impression is that AE mostly focuses on outreach (e.g., About Us, leaflets, FB page), and UF mostly focuses on advocacy and social change research (e.g., Study: Effective Communication Strategies For Addressing Wild Animal Suffering, Reviewing 2017 and Looking to 2018), although AE also claims to do some research (mainly moral philosophy literature reviews?). Again, these organizations don't only focus on invertebrates. In fact, AE doesn't even focus solely on wild animals, as they seem to spend significant resources on traditional animal advocacy (farm animals, veganism) as well.

I don't know of any insect-specific charities, although some may exist. Unfortunately, the Society for the Prevention of Cruelty to Insects is only satire. If we widen the scope a bit and include invertebrate-specific charities, I know only of Crustacean Compassion, but there may be others. There was also at one point a website called Invertebrate Considerations that seemed to be EA-aligned, but it's gone now and I don't think it was ever anything more than just a mockup.

Humane insecticides might be a promising area for future work.

Comment author: TruePath 31 December 2017 08:23:24AM 1 point [-]

I'm disappointed that the link about which invertebrates feel pain doesn't go into more detail on the potential distinction between merely learning from damage signals and the actual qualitative experience of pain. It is relatively easy to build a simple robot or write a software program that demonstrates reinforcement learning in the face of some kind of damage, but we generally don't believe such programs truly have a qualitative experience of pain. Moreover, the fact that some stimuli are both unpleasant yet rewarding (i.e., they encourage repetition) indicates these notions come apart.

Comment author: TruePath 31 December 2017 08:17:13AM 3 points [-]

While this isn't an answer, I suspect that anyone interested in insect welfare first needs a philosophical/scientific program to get a grip on what that entails.

First, unlike other kinds of animal suffering, it seems doubtful there are any interventions for insects that would substantially change their quality of life without also making a big difference to the total population. Thus, unlike with large animals, where one can find common ground between various consequentialist moral views, it seems quite likely that whether a particular intervention is good or actually harmful for insects will often turn on subtle questions about one's moral views: average utility or total? Does the welfare of possible future beings count? Is the life of the average insect a net plus or a net minus?
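
This dependence on one's population ethics can be made concrete with a toy calculation (every number below is hypothetical, chosen only for illustration):

```python
# Toy model (all numbers hypothetical) of an insect intervention that
# raises per-insect welfare while shrinking the total population.

def total_utility(population, welfare_per_individual):
    # Total view: sum welfare across all individuals.
    return population * welfare_per_individual

def average_utility(population, welfare_per_individual):
    # Average view: welfare per individual, regardless of how many exist.
    return welfare_per_individual

before = (1_000_000, 0.1)  # many insects, each with slightly positive welfare
after = (100_000, 0.5)     # far fewer insects, each much better off

# A total utilitarian counts this intervention as a harm...
print(total_utility(*before), "->", total_utility(*after))      # 100000.0 -> 50000.0
# ...while an average utilitarian counts it as a clear improvement.
print(average_utility(*before), "->", average_utility(*after))  # 0.1 -> 0.5
```

The same intervention flips sign between the two views, which is the sense in which the choice of moral framework, not just the empirical facts, decides whether the donation helps or harms.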

As such, simply donating to insect welfare risks doing (what you would regard as) a great moral harm unless you've carefully considered these aspects of your moral view and chosen interventions that align with them.

Secondly, merely figuring out what makes insects better off is hard. While our intuitions can go wrong, it's not too unreasonable to think we can infer other mammals' (and even other vertebrates') levels of pain/pleasure by analogy to our own experiences (a dog yelping is probably in pain). However, when it comes to something as different as an insect, it's unclear whether it's even safe to assume that an insect's neural response to damage feels unpleasant. After all, at some low enough level of complexity we surely don't believe a lifeform's response to damage manifests as a qualitative experience of suffering (even though the tissues in my body can react to damage, and even change behavior to avoid further damage, without interacting with my brain, we don't think my liver can experience pain on its own). At the very least, figuring out what kinds of events might induce pain/pleasure responses in an insect would require some philosophical analysis of what is known about insect neurobiology.

Finally, it is quite likely that the indirect effects of any intervention on the wider insect ecosystem, rather than any direct effect, will have the largest impact. As such, it would be a mistake to try to engage in any interventions without first doing some in-depth research into the downstream effects.

The point of all this is that, with respect to insects, we need to support academic study and consideration before actually engaging in any interventions.

Comment author: TruePath 29 August 2017 06:59:00AM *  0 points [-]

I'm a huge supporter of drug policy reform and advocate for it as much as I can in my personal life. Originally, I was going to post here suggesting we need a better breakdown of particular issues that are especially ripe for policy reform (say, reforming how drug weights are calculated) and the relative effectiveness of various interventions (lobbying, ads, lectures, etc.).

However, on reflection I think there might be good reasons not to get involved in this project.

Probably the biggest problem for both EA and drug policy reform is the perception that the people involved are just a bunch of weirdos (we're emotionally stunted nerds and they're a bunch of stoners). This perception reduces donations to EA causes (you don't get the same status boost if it's weird) and stops people from listening to the arguments of people in DPR.

If EA is seen as a big supporter of DPR efforts, this risks making the situation worse for both groups. I can just imagine an average LessWrong contributor being interviewed on TV about why he supports DPR; when the reporter asks how this affects him personally, he starts enthusiastically explaining his use of nootropics, and the public dismisses the whole thing as just weird druggies trying to make it legal to get high. This doesn't mean those of us who believe in EA can't quietly donate to DPR organizations, but it probably does prevent us from doing what EA does best: determining the particular interventions that work best at a fine-grained level and doing that.

This makes me skeptical that this is a particularly good place to intervene. If we are going to work on policy change at all, we should pick an area where we can push for very particular effective issues without the risk of backlash (to both us and DPR organizations).

Comment author: TruePath 29 July 2017 05:41:55AM *  3 points [-]

This is, IMO, a pretty unpersuasive argument, at least if you are willing, like me, to bite the bullet that SUFFICIENTLY many small gains in utility can outweigh a few large ones. I don't even find this particularly difficult to swallow. Indeed, I can explain away the feeling that this shouldn't be true by appealing to our inclination (as a matter of practical life navigation) to round sufficiently small hurts down to zero.

Also, I would suggest that many of the examples that seem problematic are deliberately rigged so that the overt description (a world with many people, each with a small amount of positive utility) presents the situation one way while the flavor text is phrased to trigger our empathetic what-is-it-like response as if the world didn't satisfy the overt description. For instance, if we remove the flavor about it being a very highly overpopulated world and simply say "consider a universe with many, many beings, each with a small amount of utility," then finding that world superior no longer seems particularly troubling; it just states the principle of adding utilities in the abstract. However, sneak in the flavor text that the world is very overcrowded, and the temptation is to imagine a world which is ACTIVELY UNPLEASANT to be in, i.e., one in which people have negative utility.

More generally, I find these kinds of considerations far more compelling as evidence that I have very poor intuitions for comparing the relative goodness/badness of certain kinds of situations, and that I had better eschew any attempt to rely on those intuitions and instead dive into the math. In particular, the worst response I can imagine is to say: "Huh, I guess I'm really bad at deciding which situations are better or worse; indeed, one can find cases where A seems better than B, B better than C, and C better than A when considered pairwise. Guess I'll throw over this helpful formalism and just use my intuition directly to evaluate which states of affairs are preferable."
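
The pairwise cycle mentioned above can be checked mechanically; this small sketch (with hypothetical options A, B, C) confirms that no consistent ranking, and hence no assignment of utilities, can reproduce cyclic intuitions:

```python
import itertools

# Hypothetical cyclic pairwise intuitions: A beats B, B beats C, C beats A.
intuitions = [("A", "B"), ("B", "C"), ("C", "A")]

def ranking_satisfies(order, prefs):
    """True if every intuited winner outranks its loser in this ordering."""
    position = {option: i for i, option in enumerate(order)}  # 0 = best
    return all(position[winner] < position[loser] for winner, loser in prefs)

# Try every possible strict ranking of the three options.
consistent = [order for order in itertools.permutations("ABC")
              if ranking_satisfies(order, intuitions)]
print(consistent)  # [] -- no ranking reproduces the cycle
```

Since no ordering satisfies all three judgments at once, intuitions like these cannot be trusted as a direct guide; some formalism has to arbitrate.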

In response to Open Thread #37
Comment author: TruePath 23 July 2017 12:06:33PM 0 points [-]

I'm wondering whether it is technically possible to stop pyroclastic flows from volcanoes (particularly ones near population centers, like Vesuvius) by building barriers, and if so, whether it's an efficient use of resources. Not quite world-changing, but it is still a low-probability, high-impact issue, and there are US cities that are near volcanoes.

I'm sure someone has thought of this before and done some analysis.

Comment author: TruePath 26 June 2017 07:33:29AM 4 points [-]

I simply don't believe that anyone is really (when it comes down to it) a presentist or a necessitist.

I don't think anyone is willing to actually endorse a choice that eliminates the headache of an existing person at the cost of bringing into the world an infant who will be tortured extensively for all time (provided no one currently existing will see it and be made sad).

More generally, these views have more basic problems than anything considered here. Consider, for instance, the problem of personal identity. For either presentism or necessitism to be true, there has to be a PRINCIPLED fact of the matter about when I become a new person if you slowly modify my brain structure until it matches that of some other possible (but not currently actual) person. The right answer to these Theseus's-ship-style worries is to shrug and say there isn't any fact of the matter, but the presentist can't take that line, because for them there are huge moral implications to where the line is drawn.

Moreover, both these views face serious puzzles about when an individual exists. Is it when they actually generate qualia (if not, you risk saying that the fact that they will exist in the future means they exist now)? How do we even know when that happens?

Comment author: TruePath 11 May 2017 07:59:50AM 2 points [-]

Before anything else, I should admit my bias here. I have a pet peeve about posts about mental illness like this. When I suffered from depression, and my friend killed himself over it, nothing pissed me off more than people passing on the same useless facts and advice to "get help" (as if that magically made it better) with the self-congratulatory attitude that they had done something about the problem and could move on. So what follows may be a result of unjust irritation/anger, but I really do believe it causes harm when we pass on truisms like that and think of ourselves as helping: either by making those suffering feel like failures/hopeless/misunderstood ("just get help and it's all good") or by letting us believe we've done our part. Maybe this is just irrational bias; I don't know.


While I like the motivation, I worry that this article does more to make us feel that "something is being done" than it does anything for EA community members with these problems. Indeed, I worry that sharing what amounts to fairly obvious truisms, which any Google search would reveal, actually saps our limited moral energy/consideration for those with mental illness (oh good, we've done our part).

Now, I'm sure the poster would defend this piece by saying that maybe most EA people with these afflictions won't get any new information from it, but some might, and it's good to inform them. Yes, if informing them were cost-free, it would be. However, there is still a cost in terms of attention, time, and pushing readers away from other issues. Indeed, unless you honestly believe that information about every mental illness ought to be posted on every blog in the world, we ought to analyze how likely this content on this site is to be useful. I doubt EA members suffer these diseases at a much greater rate than the general population, while I suspect they are already informed about these issues at a much greater rate, making this perhaps the least effective place to advertise this information.

I don't mean to downplay these diseases. They are serious problems, and to the extent there is something we can do with a high benefit/cost ratio, we should do it. So perhaps a post identifying media that are particularly likely to reach afflicted individuals who would benefit from this information, and urging readers to submit it there, would be helpful.

Comment author: TruePath 11 May 2017 07:37:31AM 0 points [-]

This feels like nitpicking that gives the impression of undermining Singer's original claim when in reality the figures support it. I have no reason to believe Singer was claiming that of all possible charitable donations trachoma surgery is the most effective; he merely gave the most stunningly large difference in cost-effectiveness between charitable donations used for comparable ends (both are about blindness, so there are no hard comparisons across kinds of suffering/disability).

I agree that within the EA community, and when presenting EA analyses of cost-effectiveness, it is important to be upfront about the full complexity of the figures. However, Singer's purpose at TED isn't to carefully pick the most cost-effective donations but to force people to confront the fact that cost-effectiveness matters. While those of us already in EA might find a statement like "we prevent one year of blindness for every three surgeries, which on average cost..." perfectly compelling, the audience members who aren't yet persuaded simply tune out. After all, it's just more math talk, and they are interested in emotional impact. The only way to convince them is to set aside getting the numbers perfectly right and focus on the emotional impact of choosing to help a blind person in the US get a dog rather than helping many people in poor countries avoid blindness.

Now, it's important that we don't simplify in misleading ways, but even with the qualifications here it is obvious that it still costs orders of magnitude more to train a dog than to prevent blindness via this surgery. Moreover, once one factors in considerations like pain, the imperfect replacement for eyes provided by a dog, etc., the original numbers are probably too favorable to dog training as far as relative cost-effectiveness goes.
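
A back-of-the-envelope check shows why the big picture survives the nitpicks; every figure below is purely illustrative (NOT actual program costs), chosen conservatively to show the gap remains large:

```python
# Hypothetical figures for a rough cost-effectiveness comparison.
guide_dog_cost = 50_000           # illustrative cost to train one guide dog
surgery_cost = 100                # illustrative cost of one trachoma surgery
surgeries_per_year_prevented = 3  # surgeries per year of blindness averted

cost_per_year_of_blindness_prevented = surgery_cost * surgeries_per_year_prevented
ratio = guide_dog_cost / cost_per_year_of_blindness_prevented
print(round(ratio))  # 167 -- still two orders of magnitude apart
```

Even tripling the per-surgery cost or halving the dog-training estimate leaves the ratio far above one, which is all the TED-talk version of the claim needs.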

This isn't to say that your point isn't important regarding people inside EA making estimates, or GiveWell analyses, or the like. I'm just pointing out that it's important to distinguish the kind of thing being done at a TED talk like this from what GiveWell does. So long as people who leave the talk and do their own research find the big picture intact (dogs >>>> trachoma surgery in cost), it's a victory.

Comment author: TruePath 12 January 2017 12:46:14PM 1 point [-]

As for the issue of acquiring power/money/influence and then using it to do good, it is important to be precise here and distinguish several questions:

1) Would it be a good thing to amass power/wealth/etc. (perhaps deceptively) and then use it to do good?

2) Is it a good thing to PLAN to amass power/wealth/etc. with the intention of "using it to do X," where X is a good thing?

2') Is it a good thing to PLAN to amass power/wealth/etc. with the intention of "using it to do good"?

3) Is it a good idea to support (or not object to) others who profess to be amassing wealth/power/etc. to do good?

Once broken down this way, it is clear that while 1 is obviously true, 2 and 3 aren't. Lacking the ability to perfectly bind one's future self means there is always the risk that you will instead use your influence/power for bad ends. 2' raises further concerns as to whether what you believe to be good ends really are good ends. This risk is compounded in 3 by the possibility that the people are simply lying about their good ends.

Once we are precise in this way, it is clear that it isn't the in-principle approval of amassing power to do good that is at fault, but rather the trustworthiness/accuracy of those who undertake such schemes.

Having said this, some degree of amassing power/influence as a precursor to doing good is probably required. The risks simply must be weighed against the benefits.

Comment author: Denkenberger 17 December 2016 01:54:04PM 1 point [-]

I agree that QALYs are more robust, and I guess it was an earlier version of the paper where we noted that using QALYs would likely produce similar comparison of cost-effectiveness to global poverty interventions. But we wanted to keep this analysis simple, and most people (though perhaps not most EAs) think in terms of saving lives. Also, two definitions of a global catastrophic risk are based on number of lives lost (I believe 10 million according to the book Global Catastrophic Risks and 10% of human population according to Open Philanthropy).

Comment author: TruePath 12 January 2017 12:00:29PM 0 points [-]

That is good to know and I understand the motivation to keep the analysis simple.

As far as the definitions go, that is a reasonable definition of the term (our notion of catastrophe doesn't include an accumulation of many small utility losses), so it is a good criterion for classifying the charity's objective. I only meant to comment on QALYs as a means of measuring effectiveness.

WTF is with the downvote? I nicely and briefly suggested that another metric might be more compelling (though the author's point about mass appeal is a convincing rebuttal). Did the comment come off as simply bitching rather than as a suggestion/observation?
