Comment author: brianwang712 20 July 2017 01:56:28AM 4 points [-]

I think one important reason for optimism that you didn't explicitly mention is the expanding circle of moral concern, a la Peter Singer. Sure, people's behaviors are strongly influenced by laziness/convenience/self-interest, but they are also influenced by their own ethical principles, which in a society-wide sense have generally grown better and more sophisticated over time. For the two examples that you give, factory farming and slavery, your view seems to be that (and correct me if I'm wrong) in the future, people will look for more efficient ways to extract food/labor, and those more efficient ways will happen to involve less suffering; therefore, suffering will decrease in the future. In my head it's the other way around: people are first motivated by their moral concerns, which may then spur them to find efficient technological solutions to these problems. For example, I don't think the cultured meat movement has its roots in trying to find a more cost-effective way to make meat; I think it started off with people genuinely concerned about the suffering of factory-farmed animals. Same with the abolitionist movement to abolish slavery in the US; I don't think industrialization had as much to do with it as people's changing views on ethics.

We reach the same conclusion – that the future is likely to be good – but I think for slightly different reasons.

Comment author: Julia_Wise 21 July 2017 05:37:50PM 6 points [-]

The change in ethical views seems very slow and patchy, though - there are something like 30 million slaves in the world today, compared to 3 million in the US at its peak (I don't know how worldwide numbers have changed over time).

Comment author: MichaelPlant 10 July 2017 06:32:44PM *  1 point [-]

Thanks for the update. That's helpful.

However, it does seem a bit hard to reconcile GWWC's and 80k's positions on this topic. GWWC (i.e. you) seem to be saying "most EAs care about poverty, so that's what we'll emphasise" whereas 80k (i.e. Ben Todd above) seems to be saying "most EAs do (/should?) care about X-risk, so that's what we'll emphasise".

These conclusions seem to be in substantial tension, which itself may confuse new and old EAs.

Comment author: Julia_Wise 13 July 2017 03:00:39PM 0 points [-]

I edited to clarify that I meant members of GWWC, not EAs in general.

Comment author: MichaelPlant 10 July 2017 01:30:03PM 1 point [-]

And what are your/GWWC's thoughts on moral inclusivity?

Comment author: Julia_Wise 10 July 2017 06:24:58PM *  2 points [-]

For as long as it's the case that most of our members [edited to clarify: GWWC members, not members of the EA community in general] are primarily concerned with global health and development, content on our blog and social media is likely to reflect that to some degree.

But we also aim to be straightforward about our cause-neutrality as a project. For example, our top recommendation for donors is the EA Funds, which are designed to get people thinking about how they want to allocate between different causes rather than defaulting to one.

Comment author: Ben_Todd 08 July 2017 09:00:17PM 16 points [-]

Hi Michael,

I agree the issue of people presenting EA as about global poverty when they actually support other causes is a big problem.

80k stopped doing this in 2014 (not a couple of months ago, as you mention), with this post: https://80000hours.org/2014/01/which-cause-is-most-effective-300/. The page you link to listed other causes at least as early as 2015: https://web.archive.org/web/20150911083217/https://80000hours.org/articles/cause-selection/

My understanding is that the GWWC website is in the process of being updated, and the recommendations on where to give are now via the EA Funds, which include 4 cause areas.

These issues take a long time to fix, though. First, it takes a long time to rewrite all your materials. Second, it takes people at least several years to catch up with your views. So we're going to be stuck with this problem for a while.

In terms of how 80,000 Hours handles it:

Their cause selection choices, which I think they updated a few months ago, only really make sense if you adopt total utilitarianism (maximise happiness throughout the history of the universe) rather than if you prefer a person-affecting view in population ethics (make people happy, don't worry about creating happy people) or you just want to focus on the near future (maybe due to uncertainty about what we can do, or pure time discounting).

This is a huge topic, but I disagree. Here are some quick reasons.

First, you should value the far future even if you only put some credence on theories like total utilitarianism.

E.g. someone who has 50% credence in the person-affecting view and 50% credence in total utilitarianism should still place significant value on the far future.

This is a better approximation of our approach - we're not confident in total utilitarianism, but place some weight on it due to moral uncertainty.

Second, even if you don't put any value on the far future, it wouldn't completely change our list.

First, the causes are assessed on scale, neglectedness and solvability. Only scale is affected by these value judgements.

Second, scale is (to simplify) assessed on three factors: GDP, QALYs and % xrisk reduction, as here: https://80000hours.org/articles/problem-framework/#how-to-assess-it

Even if you ignore the xrisk reduction column (which I think would be unreasonable due to moral uncertainty), you often find the rankings don't change that much.

E.g. Pandemic risk gets a scale score of 15 because it might pose an xrisk, but if you ignored that, I think the expected death toll from pandemics could easily be 1 million per year right now, so it would still get a score of 12. If you think engineered pandemics are likely, you could argue for a higher figure. So, this would move pandemics from being a little more promising than regular global health to about the same, but it wouldn't dramatically shift the rankings.

I think AI could be similar. It seems like there's a 10%+ chance that AI is developed within the lifetimes of the present generation. Conditional on that, if there's a 10% chance of a disaster, then the expected death toll is 75 million, or 1-2 million per year, which would also give it a score of 12 rather than 15. But it would remain one of the top ranked causes.
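The arithmetic behind the 75 million figure can be made explicit. A minimal sketch, assuming a world population of roughly 7.5 billion and annualizing over a 50-year generation (both figures are my assumptions chosen to be consistent with the 75 million and 1-2 million per year numbers above, not part of 80k's official model):

```python
# Rough sketch of the expected-value arithmetic in the comment above.
# The probabilities are the comment's illustrative figures; population
# and generation length are assumed for the purpose of this example.
world_population = 7.5e9          # approximate world population, 2017
p_ai_this_generation = 0.10       # "10%+ chance" AI arrives within present lifetimes
p_disaster_given_ai = 0.10        # conditional chance of disaster
years_per_generation = 50         # assumed span used to annualize

expected_deaths = world_population * p_ai_this_generation * p_disaster_given_ai
expected_deaths_per_year = expected_deaths / years_per_generation

print(f"{expected_deaths / 1e6:.0f} million expected deaths")     # 75 million
print(f"{expected_deaths_per_year / 1e6:.1f} million per year")   # 1.5 million
```

The point of the sketch is just that the ranking is driven by orders of magnitude, so halving or doubling any one input wouldn't move AI out of the top causes.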

I think the choice of promoting EA and global priorities research are even more robust to different value judgements.

We actively point out that the list depends on value judgements, and we provide this quiz to highlight some of the main ones: https://80000hours.org/problem-quiz/

Comment author: Julia_Wise 10 July 2017 01:19:51PM 7 points [-]

Ben's right that we're in the process of updating the GWWC website to better reflect our cause-neutrality.

Comment author: MichaelPlant 02 July 2017 10:39:32PM 1 point [-]

Could you say what forum volunteering involves and how much time you spend each week doing it?

Comment author: Julia_Wise 04 July 2017 12:45:31PM 3 points [-]

I'm not sure about tech volunteering, I think that's pretty ad hoc.

Moderating involves generally staying aware of what's being posted, removing spam, deciding with other moderators what to do about posts or comments that other users have reported as inappropriate, and sometimes giving feedback to users about how they could improve their posts. Currently it takes less than an hour a week, but if the Forum gets used more I'd expect that to increase.

Comment author: vipulnaik 04 July 2017 07:00:00AM 3 points [-]

Do you foresee any changes being made to the moderation guidelines on the forum? Now that CEA's brand name is associated with it, do you think that could mean forbidding the posting of content that is deemed "not helpful" to the movement, similar to what we see on the Effective Altruists Facebook group?

If there are no anticipated changes to the moderation guidelines, how do you anticipate CEA navigating reputational risks from controversial content posted to the forum?

Comment author: Julia_Wise 04 July 2017 12:34:16PM 7 points [-]

The main reason moderation on the Facebook group works the way it does is that the group has 13000+ members and no ability to downvote, so the ratio of signal to noise would be pretty sad if there were no screening. It's very rare that the Facebook group moderators screen out a post for being harmful - almost everything that we screen out is because it's not relevant enough.

With the Forum, everyone can upvote and downvote, so content that readers find most interesting and relevant gets sorted up to the top that way. There's also a karma threshold to make a post (though we can help newcomers with that if they ask). So I don't have the same worry about the front page becoming mostly noise.

We still expect to enforce the standards of discussion on the Forum, described in the FAQ ("Spam, abuse and materials advocating major harm or illegal activities are deleted.") But in general we expect that people don't take everything posted on the Forum to represent CEA's view.

Comment author: Julia_Wise 29 May 2017 08:07:31PM 0 points [-]

Reading this years later, I have to say I laughed about the estimate of $250/month on daycare. Where I live, the lowest-end daycare is $75/day.

Comment author: KrisMartens 14 May 2017 02:41:39PM 0 points [-]

Great post. I'll try to make a useful contribution. Maybe this can be of help as well: the APA lists of evidence-based treatments:
- for bipolar disorder: http://www.div12.org/psychological-treatments/disorders/bipolar-disorder/
- for psychosis & other related disorders: http://www.div12.org/psychological-treatments/disorders/schizophrenia-and-other-severe-mental-illnesses/

Maybe one sentence that can use some more context:

They also listed their most important needs during periods of crisis: "Getting rid of voices and paranoia"

There is nothing you can do to help someone get rid of their voices. On the contrary, encouraging them not to hear voices might make it worse. This is why Acceptance and Commitment Therapy is on the list of evidence-based approaches, and why "Validation of their experience; someone to listen who could be trusted" is on that list of needs as well.

As with most psychopathology, trying not to experience stuff often results in more of those experiences. Of course, do get help, and medication might help to get rid of voices. But changing how you cope with such experiences is also of use.

Eric Morris is one of the researchers on this topic (http://drericmorris.com/), and this is a Twitter feed aimed at contextual behavioral science and psychosis: https://twitter.com/ACBSPsychosis

Comment author: Julia_Wise 17 May 2017 07:51:41PM *  0 points [-]

Thank you!

I agree that trying to force hallucinations and paranoia away or talk someone out of them almost never works. I was citing verbatim the list of what people from the NAMI survey listed as their needs.

Just a note that the APA here is the American Psychological rather than Psychiatric Association (both go by APA, confusingly) and lists only talk therapy and social support methods, not including medication. For psychosis in particular, I think virtually anyone in the field would say medication is the first line of treatment. The kinds of treatment listed there are good for ongoing management, but if I ever became psychotic I would absolutely want a psychiatrist or emergency room to be my first stop. Talk therapy would be good to add in later.

Comment author: TruePath 11 May 2017 07:59:50AM -1 points [-]

First I should admit my bias here. I have a pet peeve about posts about mental illness like this. When I suffered from depression and my friend killed himself over it, there was nothing that pissed me off more than people passing on the same useless facts and advice to get help (as if that magically made it better) with the self-congratulatory attitude that they had done something about the problem and could move on. So what follows may be a result of unjust irritation/anger, but I do really believe that it causes harm when we pass on truisms like that and think of ourselves as helping...either by making those suffering feel like failures/hopeless/misunderstood (just get help and it's all good) or by causing us to believe we've done our part. Maybe this is just irrational bias, I don't know.

--

While I like the motivation, I worry that this article does more to make us feel better that 'something is being done' than it does anything for EA community members with these problems. Indeed, I worry that sharing what amounts to fairly obvious truisms that any google search would reveal actually saps our limited moral energy/consideration for those with mental illness (oh good, we've done our part).

Now I'm sure the poster would defend this piece by saying that maybe most EA people with these afflictions won't get any new information from it, but some might, and it's good to inform them. Yes, if informing them were cost-free it would be. However, there is still a cost in terms of attention, time, and pushing readers away from other issues. Indeed, unless you honestly believe that information about every mental illness ought to be posted on every blog around the world, it seems we ought to analyze how likely this content on this site is to be useful. I doubt EA members suffer these diseases at a much greater rate than the population in general, while I suspect they are informed about these issues at a much greater rate, making this perhaps the least effective place to advertise this information.

I don't mean to downplay these diseases. They are serious problems, and to the extent there is something we can do with a high benefit/cost ratio, we should. So maybe a post identifying media that are particularly likely to serve afflicted individuals who would benefit from this information, and urging readers to submit it there, would be helpful.


Comment author: Julia_Wise 17 May 2017 07:39:08PM *  0 points [-]

I did question whether this was on-topic enough to be a good fit for this forum. (I don't think awareness about every health issue that affects EAs would be a good use of the space, even if it affects a higher proportion than these problems.)

I do think these problems can be unusually and spectacularly destructive when unchecked, and often even when much effort has been made. I also think most people don't have a good concept of how to recognize these conditions or even what to google; I certainly wouldn't have before getting training as a social worker.

I definitely don't want us to congratulate ourselves for having dealt with these problems, because there have been cases when people in this community have needed help here and not gotten enough. I wrote this in the hope that it will tip the balance in some future crisis toward people having the knowledge they need, not so that we can check this off our list as a solved problem. These are really hard problems to deal with, both for people who have them and for people trying to help, and that's exactly why I wanted a resource available.

I'm so sorry about your friend. This kind of information definitely isn't fail-safe, but I think it's the best we have.

Comment author: Julia_Wise 15 May 2017 02:31:36PM 3 points [-]

Thanks for researching and writing this up! We've been discussing the topic a lot at CEA/Giving What We Can over the last few days. I think this points to the importance of flagging publication dates (as GiveWell does, indicating that the research on a certain page was current as of a given date but isn't necessarily accurate anymore). Fact-checking, updating, or just flagging information as older and possibly inaccurate was on our to-do list for materials on the Giving What We Can site, which go back as much as 10 years and sometimes no longer represent our best understanding. I now think it needs to be higher priority than I did.

For individuals rather than organizations, I'm unsure about the best way to handle things like this, which will surely come up again. If someone publishes a paper or blog post, how often are they obliged to update it with corrected figures? I'm thinking of a popular post which used PSI's figure of around $800 to save a child's life. In 2010 when it was written that seemed like a reasonable estimate, but it doesn't now. Is the author responsible for updating the figure everywhere the post was published and re-published? (That's a strong disincentive for ever writing anything that includes a cost-effectiveness estimate, since they're always changing.) Does everyone who quoted it or referred to it need to go back each year and include a new estimate? My guess is it's good practice, particularly when we notice people creating new material that cites old figures, to give them a friendly note with a link to newer sources, with the understanding that this stuff is genuinely confusing and hard to stay on top of.
