Comment author: brianwang712 20 July 2017 01:56:28AM 4 points [-]

I think one important reason for optimism that you didn't explicitly mention is the expanding circle of moral concern, a la Peter Singer. Sure, people's behaviors are strongly influenced by laziness/convenience/self-interest, but they are also influenced by their own ethical principles, which in a society-wide sense have generally grown better and more sophisticated over time. For the two examples that you give, factory farming and slavery, your view seems to be that (and correct me if I'm wrong) in the future, people will look for more efficient ways to extract food/labor, and those more efficient ways will happen to involve less suffering; therefore, suffering will decrease in the future. In my head it's the other way around: people are first motivated by their moral concerns, which may then spur them to find efficient technological solutions to these problems. For example, I don't think the cultured meat movement has its roots in trying to find a more cost-effective way to make meat; I think it started with people genuinely concerned about the suffering of factory-farmed animals. The same goes for the movement to abolish slavery in the US; I don't think industrialization had as much to do with it as people's changing views on ethics.

We reach the same conclusion – that the future is likely to be good – but I think for slightly different reasons.

Comment author: Julia_Wise 21 July 2017 05:37:50PM 7 points [-]

The change in ethical views seems very slow and patchy, though - there are something like 30 million slaves in the world today, compared to 3 million in the US at its peak (I don't know how worldwide numbers have changed over time).

Comment author: MichaelPlant 10 July 2017 06:32:44PM *  1 point [-]

Thanks for the update. That's helpful.

However, it does seem a bit hard to reconcile GWWC's and 80k's positions on this topic. GWWC (i.e. you) seems to be saying "most EAs care about poverty, so that's what we'll emphasise", whereas 80k (i.e. Ben Todd above) seems to be saying "most EAs do (/should?) care about x-risk, so that's what we'll emphasise".

These conclusions seem to be in substantial tension, which may itself confuse new and old EAs.

Comment author: Julia_Wise 13 July 2017 03:00:39PM 0 points [-]

I edited to clarify that I meant members of GWWC, not EAs in general.

Comment author: MichaelPlant 10 July 2017 01:30:03PM 1 point [-]

And what are your/GWWC's thoughts on moral inclusivity?

Comment author: Julia_Wise 10 July 2017 06:24:58PM *  2 points [-]

For as long as it's the case that most of our members [edited to clarify: GWWC members, not members of the EA community in general] are primarily concerned with global health and development, content on our blog and social media is likely to reflect that to some degree.

But we also aim to be straightforward about our cause-neutrality as a project. For example, our top recommendation for donors is the EA Funds, which are designed to get people thinking about how they want to allocate between different causes rather than defaulting to one.

Comment author: Ben_Todd 08 July 2017 09:00:17PM 16 points [-]

Hi Michael,

I agree the issue of people presenting EA as about global poverty when they actually support other causes is a big problem.

80k stopped doing this in 2014 (not a couple of months ago, as you mention), with this post: https://80000hours.org/2014/01/which-cause-is-most-effective-300/. The page you link to listed other causes at least as early as 2015: https://web.archive.org/web/20150911083217/https://80000hours.org/articles/cause-selection/

My understanding is that the GWWC website is in the process of being updated, and the recommendations on where to give are now made via the EA Funds, which include four cause areas.

These issues take a long time to fix, though. First, it takes a long time to rewrite all your materials. Second, it takes people at least several years to catch up with your views. So we're going to be stuck with this problem for a while.

In terms of how 80,000 Hours handles it, you wrote:

"Their cause selection choices, which I think they updated a few months ago, only really make sense if you adopt total utilitarianism (maximise happiness throughout the history of the universe) rather than if you prefer a person-affecting view in population ethics (make people happy, don't worry about creating happy people) or you just want to focus on the near future (maybe due to uncertainty about what we can do, or pure time discounting)."

This is a huge topic, but I disagree. Here are some quick reasons.

First, you should value the far future even if you only put some credence in theories like total utilitarianism.

e.g. Someone who has 50% credence in the person-affecting view and 50% credence in total utilitarianism should still place significant value on the far future.

This is a better approximation of our approach - we're not confident in total utilitarianism, but we place some weight on it due to moral uncertainty.
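As a toy illustration of that first point (the numbers here are hypothetical, chosen only to show the structure of the argument):

```python
# Toy expected-value calculation under moral uncertainty. V is a
# hypothetical stand-in for the value of the far future under total
# utilitarianism; a person-affecting view assigns it ~0.
V = 1e20

credences = {"total utilitarianism": 0.5, "person-affecting view": 0.5}
values = {"total utilitarianism": V, "person-affecting view": 0.0}

# Expected value of the far future across the two theories.
ev = sum(credences[theory] * values[theory] for theory in credences)
print(ev)  # 5e+19 -- half of V, but still enormous
```

Halving your credence only halves the expected value, so as long as the far future is astronomically valuable under one theory you take seriously, it dominates the calculation.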

Second, even if you don't put any value on the far future, it wouldn't completely change our list.

The causes are assessed on scale, neglectedness and solvability, and only scale is affected by these value judgements.

Scale is (to simplify) assessed on three factors: GDP, QALYs, and % x-risk reduction, as here: https://80000hours.org/articles/problem-framework/#how-to-assess-it

Even if you ignore the x-risk reduction column (which I think would be unreasonable due to moral uncertainty), you often find the rankings don't change that much.

E.g. Pandemic risk gets a scale score of 15 because it might pose an x-risk, but if you ignored that, I think the expected annual death toll from pandemics could easily be 1 million per year right now, so it would still get a score of 12. If you think engineered pandemics are likely, you could argue for a higher figure. So this would move pandemics from being a little more promising than regular global health to about the same, but it wouldn't dramatically shift the rankings.

I think AI could be similar. It seems like there's a 10%+ chance that AI is developed within the lifetimes of the present generation. Conditional on that, if there's a 10% chance of a disaster, then the expected death toll is 75 million (roughly 10% × 10% × 7.5 billion people), or 1-2 million per year, which would also give it a score of 12 rather than 15. But it would remain one of the top-ranked causes.
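To make that arithmetic explicit, here's a minimal back-of-the-envelope sketch. The numbers are assumptions for illustration, not 80k's published framework: a world population of ~7.5 billion, the toll spread over a ~50-year horizon, and a logarithmic scale score calibrated so that 1 million expected deaths per year scores 12, as in the pandemic example above.

```python
import math

# Back-of-the-envelope sketch of the reasoning above. All constants are
# illustrative assumptions, not figures from 80k's problem framework.
WORLD_POPULATION = 7.5e9
HORIZON_YEARS = 50  # assumed horizon over which the expected deaths are spread

def expected_annual_deaths(p_event: float, p_disaster: float) -> float:
    """Expected deaths per year, assuming a disaster kills everyone."""
    return p_event * p_disaster * WORLD_POPULATION / HORIZON_YEARS

def scale_score(annual_deaths: float) -> float:
    """Assumed log-10 score, calibrated so 1M deaths/year scores 12."""
    return 12 + math.log10(annual_deaths / 1e6)

# AI: 10% chance it's developed this generation, 10% chance of disaster.
ai_deaths = expected_annual_deaths(p_event=0.10, p_disaster=0.10)
print(f"AI: ~{ai_deaths:,.0f} deaths/year, score ~{scale_score(ai_deaths):.0f}")
# -> AI: ~1,500,000 deaths/year, score ~12
```

Under these assumptions AI lands at roughly 1.5 million expected deaths per year and a scale score of about 12, consistent with the figures above.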

I think the causes of promoting EA and global priorities research are even more robust to different value judgements.

We actively point out that the list depends on value judgements, and we provide this quiz to highlight some of the main ones: https://80000hours.org/problem-quiz/

Comment author: Julia_Wise 10 July 2017 01:19:51PM 7 points [-]

Ben's right that we're in the process of updating the GWWC website to better reflect our cause-neutrality.

Comment author: MichaelPlant 02 July 2017 10:39:32PM 1 point [-]

Could you say what forum volunteering involves and how much time you spend each week doing it?

Comment author: Julia_Wise 04 July 2017 12:45:31PM 3 points [-]

I'm not sure about tech volunteering; I think that's pretty ad hoc.

Moderating involves generally staying aware of what's being posted, removing spam, deciding with other moderators what to do about posts or comments that other users have reported as inappropriate, and sometimes giving feedback to users about how they could improve their posts. Currently it takes less than an hour a week, but if the Forum gets used more I'd expect that to increase.

Comment author: vipulnaik 04 July 2017 07:00:00AM 3 points [-]

Do you foresee any changes being made to the moderation guidelines on the forum? Now that CEA's brand name is associated with it, do you think that could mean forbidding the posting of content that is deemed "not helpful" to the movement, similar to what we see on the Effective Altruists Facebook group?

If there are no anticipated changes to the moderation guidelines, how do you anticipate CEA navigating reputational risks from controversial content posted to the forum?

Comment author: Julia_Wise 04 July 2017 12:34:16PM 7 points [-]

The main reason moderation on the Facebook group works the way it does is that the group has 13,000+ members and no ability to downvote, so the ratio of signal to noise would be pretty sad if there were no screening. It's very rare that the Facebook group moderators screen out a post for being harmful - almost everything we screen out is removed because it's not relevant enough.

With the Forum, everyone can upvote and downvote, so content that readers find most interesting and relevant gets sorted to the top. There's also a karma threshold to make a post (though we can help newcomers with that if they ask). So I don't have the same worry about the front page becoming mostly noise.

We still expect to enforce the standards of discussion on the Forum, described in the FAQ ("Spam, abuse and materials advocating major harm or illegal activities are deleted."). But in general we expect that people won't take everything posted on the Forum to represent CEA's view.


Changes to the EA Forum

Over the last several years, the EA Forum has been run on a volunteer-led basis. Given how much the EA community has grown, the volunteers who have been running the Forum have decided to transition primary responsibility for the EA Forum to the Centre for Effective Altruism. In practice, this...

Upcoming AMA with Luke Muehlhauser on consciousness and moral patienthood (June 28, starting 9am Pacific)

Luke Muehlhauser of the Open Philanthropy Project recently published a major report on animal consciousness and the question of "moral patienthood" (i.e. which beings merit moral concern). The purpose of the report is to inform Open Phil's grantmaking, especially in its farm animal welfare focus area. Luke would like...
Comment author: Julia_Wise 29 May 2017 08:07:31PM 0 points [-]

Reading this years later, I have to say I laughed at the estimate of $250/month for daycare. Where I live, the lowest-end daycare is $75/day.

Comment author: KrisMartens 14 May 2017 02:41:39PM 0 points [-]

Great post. I'll try to make a useful contribution. Maybe these can be of help as well: the APA lists of evidence-based treatments:
- for bipolar disorder: http://www.div12.org/psychological-treatments/disorders/bipolar-disorder/
- for psychosis and other related disorders: http://www.div12.org/psychological-treatments/disorders/schizophrenia-and-other-severe-mental-illnesses/

Maybe one sentence that can use some more context:

They also listed their most important needs during periods of crisis: "Getting rid of voices and paranoia"

There is nothing you can do to help someone get rid of their voices. On the contrary, encouraging them not to hear voices might make it worse. This is why Acceptance and Commitment Therapy is on the list of evidence-based approaches, and why "Validation of their experience; someone to listen who could be trusted" is on that list of needs as well.

As with most psychopathology, trying not to experience stuff like that often results in more of those experiences. Of course, do get help, and medication might help to get rid of voices. But changing how you cope with such experiences is also of use.

Eric Morris is one of the researchers on this topic (http://drericmorris.com/), and this Twitter feed is aimed at contextual behavioral science and psychosis: https://twitter.com/ACBSPsychosis

Comment author: Julia_Wise 17 May 2017 07:51:41PM *  0 points [-]

Thank you!

I agree that trying to force hallucinations and paranoia away, or to talk someone out of them, almost never works. I was quoting verbatim the needs that people in the NAMI survey listed.

Just a note that the APA here is the American Psychological rather than the American Psychiatric Association (both go by APA, confusingly), and its list includes only talk therapy and social support methods, not medication. For psychosis in particular, I think virtually anyone in the field would say medication is the first line of treatment. The kinds of treatment listed there are good for ongoing management, but if I ever became psychotic I would absolutely want a psychiatrist or emergency room to be my first stop. Talk therapy would be good to add in later.
