Comment author: kbog 14 July 2017 10:25:40PM 0 points

Well sure, if we proceed from the assumption that the moral theory really was correct, but the point was that none of those proposed theories has been generally accepted by moral philosophers.

Your point was that "none of the existing ethical theories are up to the task of giving us such a set of principles that, when programmed into an AI, would actually give results that could be considered "good"." But this claim is simply begging the question by assuming that all the existing theories are false. And to claim that a theory would have bad moral results is different from claiming that it's not generally accepted by moral philosophers. It's plausible that a theory would have good moral results, in virtue of it being correct, while not being accepted by many moral philosophers. Since there is no dominant moral theory, this is necessarily the case as long as some moral theory is correct.

I gave one in the comment? That philosophy has accepted that you can't give a human-comprehensible set of necessary and sufficient criteria for concepts.

If you're referring to ethics, no, philosophy has not accepted that you cannot give such an account. You believe this, on the basis of your observation that philosophers give different accounts of ethics. But that doesn't mean that moral philosophers believe it. They just don't think that the fact of disagreement implies that no such account can be given.

It seems obvious to me that physics does indeed proceed by social consensus in the manner you describe. Someone does an experiment, then others replicate the experiment until there is consensus that this experiment really does produce these results; somebody proposes a hypothesis to explain the experimental results, others point out holes in that hypothesis, there's an extended back-and-forth conversation and further experiments until there is a consensus that the modified hypothesis really does explain the results and that it can be accepted as an established scientific law. And the same for all other scientific and philosophical disciplines. I don't think that ethics is special in that sense.

So you haven't pointed out any particular features of ethics, you've merely described a feature of inquiry in general. This shows that your claim proves too much - it would be ridiculous to conduct physics by studying psychology.

Sure, there is a difference between what ordinary people believe and what people believe when they're trained professionals: that's why you look for a social consensus among the people who are trained professionals and have considered the topic in detail, not among the general public.

But that's not a matter of psychological inquiry, that's a matter of looking at what is being published in philosophy, becoming familiar with how philosophical arguments are formed, and staying in touch with current developments in the field. So you are basically describing studying philosophy. Studying or researching psychology will not tell you anything about this.

Comment author: SamDeere 14 July 2017 09:00:45PM 2 points

Hey Josh, thanks for the comment and sorry for the wait on a response.

The TL;DR is that I think the branding changes provide a small amount of upside in terms of consistency, and have a low risk of downside, because I don't expect they'll significantly change discoverability or forum composition, or counterfactually change people's impressions of the different parts of the EA online space.

Our primary motivation is to reduce the proliferation of very similar domain names that all correspond to different things (e.g. effective-altruism.com is the Forum, while effectivealtruism.com was previously the Doing Good Better site, etc.). From our perspective it seems useful to consolidate community assets under the same domain, both so that users see them as part of a broadly unified whole, and, in the longer term, for technical reasons (e.g. it's easier to share logins between different sites on the same domain). I agree that it's probably good to keep some branding differentiation between the Forum and the front page of EffectiveAltruism.org; however, I think it would be disingenuous for us to pretend that there's no overlap.

Perhaps a good analogy is Y Combinator/Hacker News — the front page presents a more welcoming, informative face, whereas Hacker News has a pretty intense community and may not always be welcoming to newcomers. However, I think people are generally pretty good at understanding that the organization and the user-generated content are different things, while understanding them to be part of the same broad sphere.

I wholeheartedly agree that the Forum is a more advanced part of the community, and it's certainly not our intention to dilute the quality of conversation or flood it with newcomers who may lack the context to meaningfully contribute to some of the more in-depth discussions, or who may find the tone unwelcoming. However, this seems like an issue of discoverability. The Forum is already pretty discoverable (it's the fourth result for 'effective altruism' on Google), so if someone totally new is doing a wide survey of what the EA online space is like, they'll find it (and it already has 'Effective Altruism' in the name...). However, we're not planning on adding additional links to it from the www domain, or changing how we market it in other channels — I don't expect this change to significantly alter the composition of people posting on the Forum, nor how people will view the broad idea of 'effective altruism' (especially not relative to the status quo).

Given that there's already a strong association between EA and the EA Forum, I don't think the exact domain matters that much. If we didn't want there to be any association, we should probably take the words 'effective altruism' out of the title and have a completely different domain. This isn't something we're currently considering.

I'd prefer to use a subdomain rather than a nested route because it's a significantly simpler DNS/server setup. I think the SEO point is a bit counter to the other points. I agree that it will have some SEO implications, but if the issue is discoverability, then actually making the Forum less discoverable in a random search seems to work more to your purposes (as above, currently the Forum is the fourth result on Google). In terms of implementation, we're planning to rewrite the old domain to the new one (using 301 redirects and keeping the old domain active to prevent broken links). I'd also planned to advise Google of the domain change using Search Console. I'd be very happy to hear from you if there are additional steps that you think are important here.
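
If it helps to picture the redirect step, here's a minimal sketch in Python of what a permanent redirect does (purely illustrative; the real setup would live at the web-server layer, and the target hostname below is a hypothetical stand-in rather than a committed plan):

```python
# Minimal sketch of a 301-redirect shim, assuming the old domain's traffic
# is pointed at this process. The hostname below is hypothetical, not the
# actual migration target.
from http.server import BaseHTTPRequestHandler, HTTPServer

NEW_HOST = "forum.effectivealtruism.org"  # hypothetical target subdomain

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A 301 tells browsers and search engines that the move is permanent,
        # which keeps old links working and transfers most of the SEO signal.
        self.send_response(301)
        self.send_header("Location", "https://" + NEW_HOST + self.path)
        self.end_headers()

    do_HEAD = do_GET  # respond to HEAD requests the same way

if __name__ == "__main__":
    HTTPServer(("", 8080), RedirectHandler).serve_forever()
```

As I understand it, the details that matter are the 301 status (rather than a temporary 302) and preserving the request path, which is what lets old deep links keep resolving after the move.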

Comment author: HowieL 14 July 2017 06:24:04PM 1 point

I obviously can't speak for GWWC but I can imagine some reasons it could reach different conclusions. For example, GWWC is a membership organization and might see itself as, in part, representing its members or having a duty to be responsive to their views. At times, listeners might understand statements by GWWC as reflecting the views of its membership.

80k's mission seems to be research/advising so its users might have more of an expectation that statements by 80k reflect the current views of its staff.

Comment author: jwbayou5 14 July 2017 05:01:04PM 0 points

This is great!

Comment author: Vincent-Soderberg 14 July 2017 04:14:20PM 1 point

Suggestion for possible low-hanging fruit: getting DGB, The Life You Can Save, and 80k into all the libraries in the Netherlands. I'm constantly surprised by how few libraries have the books in Sweden, and the benefit is that once you get them in, at least a few people will read them, and it's easier for a potential EA to get into EA if there's good reading material in their vicinity. That's my idea, at least.

Other than that, I'll be going to Fest i Nord (a Mormon convention), and I'll likely meet someone from the Netherlands. I'll be sure to mention the EAN to them!

Comment author: Vincent-Soderberg 14 July 2017 10:48:38AM 1 point

Interesting read!

Just a thought: does anyone have any thoughts on religion and EA? I don't mean it in a "saving souls is cost effective" way, more in the moral philosophy way.

My personal take is that unless someone is really hardcore/radical/orthodox, most of what EA says would seem positive/ethical to most religious people. That's certainly my experience talking to religious folks; no one has ever gotten mad at me unless I get too consequentialist. Religious people might even be more open to the Giving What We Can pledge, and to EA in some ways, because of the common practice of tithing. They might decide that "my faith is the most cost-effective cause", but that only sometimes happens; they usually seem to donate on top of it.

PS: Michael, was it my question on the Facebook mental health EA post that prompted you to write this? Just curious.

Comment author: Evan_Gaensbauer 14 July 2017 08:09:15AM 0 points

Talking about people in the abstract, or in a tone that treats them as some kind of "other", is to generalize and stereotype. Or maybe generalizing and stereotyping people others them, and makes them too abstract to empathize with. Whatever the direction of causality, there are good reasons people might take my comment poorly. There are lots of skirmishes online in effective altruism between causes, and I expect most of us don't like all being lumped together in a big bundle, because it feels like under those circumstances at least a bunch of people in your ingroup or whatnot will feel strawmanned. That's what my comment reads like. That's not my intention.

I'm just trying to be frank. On the Effective Altruism Forum, I try to follow Grice's Maxims, because I think writing in that style heuristically optimizes the fidelity of our words to the sort of epistemic communication standards the EA community aspires to, especially as inspired by the rationality community. I could do better on the maxims of quantity and manner/clarity sometimes, but I think I do a decent job on here. I know this isn't the only thing people will value in discourse. However, there are lots of competing standards for what the most appropriate discourse norms are, and nobody is establishing to others how their norms would maximize not just the satisfaction of their own preferences, but the total or average satisfaction of what everyone values out of discourse. That seems like the utilitarian thing to do.

The effects of ingroup favouritism between competing cause selections in the community don't seem healthy for the EA ecosystem. If we want to get very specific, here's how finely the EA community can be sliced up by cause-selection-as-group-identity:

  • vegan, vegetarian, reducetarian, omnivore/carnist
  • animal welfarist, animal liberationist, anti-speciesist, speciesist
  • AI safety, x-risk reducer (in general), s-risk reducer
  • classical utilitarian, negative utilitarian, hedonic utilitarian, preference utilitarian, virtue ethicist, deontologist, moral intuitionist/none-of-the-above
  • global poverty EAs; climate change EAs?; social justice EAs...?

The list could go on forever. Everyone feels like they're representing not only their own preferences in discourse, but sometimes even those of future generations, all life on Earth, tortured animals, or fellow humans living in agony. Unless as a community we make a conscientious effort to reach towards some shared discourse norms that are mutually satisfactory to multiple parties or individual effective altruists, however they see themselves, communication failure modes will keep happening. There's strawmanning and steelmanning, and then there are representations of concepts in EA which fall in between.

I think if we as a community expect everyone to impeccably steelman everyone all the time, we're being unrealistic. Rapid growth of the EA movement is what organizations from various causes seem to be rooting for. That means lots of newcomers who aren't going to read all the LessWrong Sequences or Doing Good Better before they start asking questions and contributing to the conversation. When they get downvoted for not knowing the archaic codex that is evolved EA discourse norms, which aren't written down anywhere, they're going to exit fast. I'm not going anywhere, but if we aren't willing to be more charitable to people we at first disagree with than they are to us, this movement won't grow. People might be belligerent about, or alarmed by, the challenges EA presents to their moral worldview, but they're still curious. Spurning doesn't lead to learning.

All of the above refers only to specialized discourse norms within effective altruism. This would be on top of the complications of effective altruists' private lives, all the usual identity politics, and otherwise the common decency and common sense we would expect of posters on the forum. All of that can already be difficult for diverse groups of people as it is. But for all of us to go around assuming, per the illusion of transparency, that things are fine and dandy with regard to how a cause is represented, without openly discussing it, is to expect too much of each and every effective altruist.

Also, as of this comment, my parent comment above is at a net positive 1 upvote, so it's all good.

Comment author: Ben_Todd 14 July 2017 01:03:01AM 1 point

Hi there,

I think you're basically right, in that people should care about comparative advantage to the degree that the community is responsive to their choices, and that they're value-aligned with typical people in the community. If no one is going to change their career in response to your choice, then you default back to whatever looks highest-impact in general.

I have a more detailed post about this, but I conclude that people should consider all of role impact, personal fit and comparative advantage, putting more or less emphasis on comparative advantage relative to personal fit depending on certain conditions.

Comment author: JanBrauner 13 July 2017 06:20:44PM 5 points

With regards to "following your comparative advantage":

Key statement: While "following your comparative advantage" is beneficial as a community norm, it might be less relevant as individual advice.

Imagine two people, Ann and Ben. Ann has very good career capital to work on cause X: she studied a relevant subject, has relevant skills, and maybe some promising work experience and a network. Ben has very good career capital to contribute to cause Y. Both have the aptitude to become good at the other cause as well, but it would take some time, involve some cost, and maybe not be as safe.

Now Ann thinks that cause Y is 1000 times as urgent as cause X, and for Ben it is the other way around. Both consider retraining for the cause they think is more urgent.

From a community perspective, it is reasonable to promote the norm that everyone should follow their comparative advantage. This avoids prisoner's dilemma situations and increases the total impact of the community. After all, the solution that would best satisfy both Ann's and Ben's goals would be for each to continue in their respective area of expertise. (Let's assume they could be motivated to do so.)

However, from a personal perspective, let's look at Ann's situation: in reality, of course, there will rarely be a Ben to mirror Ann, who is also considering retraining at exactly the same time as Ann. And if there were, they would likely not know each other. So Ann is not in a position to offer anyone the specific trade that she could offer Ben, namely: "I keep contributing to cause X if you continue contributing to cause Y."

So these might be Ann's thoughts: "I really think that cause Y is much more urgent than anything I could contribute to cause X. And yes, I have already considered moral uncertainty. If I went on to work on cause X, this would not directly cause someone else to work on cause Y. I realize that it is beneficial for EA to have a norm that people should follow their comparative advantage, and the creation of such a norm would be very valuable. However, I do not see how my decision could possibly have any effect on the establishment of such a norm"

So for Ann it seems to be a prisoner’s dilemma without iteration, and she ought to defect.
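
To make the payoff structure concrete, here is a toy sketch in Python with entirely made-up numbers (the 10-vs-5 output figures, the 1000x valuations, and the function names are illustrative assumptions, nothing more):

```python
# Toy one-shot prisoner's dilemma for Ann and Ben. Assumed numbers:
# an expert produces 10 units in their own field, a retrained person 5,
# and each person values the *other* cause 1000x more than their own.
EXPERT, RETRAINED = 10, 5

def ann_utility(ann_switches, ben_switches):
    """Ann's utility: she values cause Y at 1000 per unit and cause X at 1."""
    x_units = (0 if ann_switches else EXPERT) + (RETRAINED if ben_switches else 0)
    y_units = (RETRAINED if ann_switches else 0) + (0 if ben_switches else EXPERT)
    return 1 * x_units + 1000 * y_units

for ben_switches in (False, True):
    stay = ann_utility(False, ben_switches)
    switch = ann_utility(True, ben_switches)
    print(f"Ben {'switches' if ben_switches else 'stays'}: "
          f"Ann stays -> {stay}, Ann switches -> {switch}")

# Switching is better for Ann whatever Ben does (15000 > 10010 if Ben stays;
# 5005 > 15 if Ben switches), yet by symmetry both staying (10010 each)
# beats both switching (5005 each) -- the one-shot prisoner's dilemma.
```

Under these assumed numbers, switching strictly dominates for each of them, even though both staying would leave both better off by their own lights.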

I see one consideration for why Ann should continue working towards cause X: if Ann believed that EA was going to grow a lot, EA would reach many people with a better comparative advantage for cause Y. And if EA successfully promoted said norm, those people would all work on cause Y, until Y was no longer neglected enough to be much more urgent than cause X. Whether Ann believes this is likely to happen depends strongly on her predictions of the future of EA and on the specific characteristics of causes X and Y. If she believed this would happen (soon), she might think it best for her to continue contributing to X. However, I think this consideration is fairly uncertain and I would not give it much weight in my decision process.

So it seems that:
  • it clearly makes sense (for CEA, 80,000 Hours, ...) to promote such a norm
  • it makes much less sense for an individual to follow the norm, especially if said individual is not cause-agnostic or does not think that all causes are within the same 1-2 orders of magnitude of urgency

All in all, the situation seems pretty weird. And there does not seem to be a consensus amongst EAs on how to deal with this. A real world example: I have met several trained physicians who thought that AI safety was the most urgent cause. Some retrained to do AI safety research, others continued working in health-related fields. (Of course, for each individual there were probably many other factors that played a role in their decision apart from impact, e.g. risk aversion, personal fit for AI safety work, fit with the rest of their lives, ...)

PS: I would be really glad if you could point me to errors in my reasoning or aspects I missed, as I, too, am a physician currently considering retraining for AI safety research :D

PPS: I am new to this forum and need 5 karma to be able to post threads. So feel free to upvote.

Comment author: Wei_Dai 13 July 2017 05:06:54PM 5 points

I can't really think of anyone more familiar with MIRI's work than Paul who isn't already at MIRI (note that Paul started out pursuing MIRI's approach and shifted in an ML direction over time).

The Agent Foundations Forum would have been a good place to look for more people familiar with MIRI's work. Aside from Paul, I see Stuart Armstrong, Abram Demski, Vadim Kosoy, Tsvi Benson-Tilsen, Sam Eisenstat, Vladimir Slepnev, Janos Kramar, Alex Mennen, and many others. (Abram, Tsvi, and Sam have since joined MIRI, but weren't employees of it at the time of the Open Phil grant.)

That being said, I agree that the public write-up on the OpenAI grant doesn't reflect that well on OpenPhil, and it seems correct for people like you to demand better moving forward

I had previously seen some complaints about the way the OpenAI grant was made, but until your comment, hadn't thought of a possible group blind spot due to a common ML perspective. If you have any further insights on this and related issues (like why you're critical of deep learning but still think the grant to OpenAI was a pretty good idea, what are your objections to Paul's AI alignment approach, how could Open Phil have done better), would you please write them down somewhere?

Comment author: jsteinhardt 13 July 2017 03:16:20PM 4 points

(Speaking for myself, not OpenPhil, who I wouldn't be able to speak for anyways.)

For what it's worth, I'm pretty critical of deep learning, which is the approach OpenAI wants to take, and still think the grant to OpenAI was a pretty good idea; and I can't really think of anyone more familiar with MIRI's work than Paul who isn't already at MIRI (note that Paul started out pursuing MIRI's approach and shifted in an ML direction over time).

That being said, I agree that the public write-up on the OpenAI grant doesn't reflect that well on OpenPhil, and it seems correct for people like you to demand better moving forward (although I'm not sure that adding HRAD researchers as TAs is the solution; also note that OPP does consult regularly with MIRI staff, though I don't know if they did for the OpenAI grant).

Comment author: Julia_Wise 13 July 2017 03:00:39PM 0 points

I edited to clarify that I meant members of GWWC, not EAs in general.

Comment author: Wei_Dai 13 July 2017 11:37:14AM 5 points

That actually didn't cross my mind before, so thanks for pointing it out. After reading your comment, I decided to look into Open Phil's recent grants to MIRI and OpenAI, and noticed that of the 4 technical advisors Open Phil used for the MIRI grant investigation (Paul Christiano, Jacob Steinhardt, Christopher Olah, and Dario Amodei), all either have an ML background or currently advocate an ML-based approach to AI alignment. For the OpenAI grant, however, Open Phil didn't seem to have similarly engaged technical advisors who might be predisposed to be critical of the potential grantee (e.g., HRAD researchers), and in fact two of the Open Phil technical advisors are also employees of OpenAI (Paul Christiano and Dario Amodei). I have to say this doesn't look very good for Open Phil in terms of making an effort to avoid potential blind spots and bias.

Comment author: Cornelius 13 July 2017 09:04:20AM 0 points

I see every day the devastating economic harm that organizations like the Against Malaria Foundation wreak on communities.

If it's so prevalent, then make a series of videos about that instead. It would serve to undermine GiveWell far more and strengthen your credibility.

Your video against GiveWell does not address or debunk any of GiveWell's evidence. It's a philosophical treatise on GiveWell's methods, not an evidence-based one. Arguing by analogy from your own experience is not evidence. I've been robbed three times living in Vancouver and zero times in Africa, despite living in Namibia/South Africa for most of my life. This does not, however, entail that Vancouver is more dangerous; in fact, I have near-zero evidence to back up the claim that Vancouver is more dangerous.

All of your methodological objections (and far stronger anti-EA arguments) were systematically raised in Iason Gabriel's piece on criticisms of effective altruism. And all of those criticisms were systematically responded to, and found lacking, in Halstead et al.'s defense paper.

I'd highly recommend reading both of these. They are both pretty badass.

Comment author: Linch 13 July 2017 05:56:57AM 0 points

This seems like a perfectly reasonable comment to me. Not sure why it was heavily downvoted.

Comment author: MichaelPlant 12 July 2017 04:10:45PM 0 points

I hadn't thought of TLYCS as an/the anti-poverty org. I guess I didn't think of it that way as they're not so present in my part of the EA blogosphere. Maybe it's less of a problem if there are at least charities/orgs to represent different worldviews (although this would require quite a lot of duplication of work, so it's less than ideal).

Comment author: Michelle_Hutchinson 12 July 2017 11:13:17AM 7 points

You might think The Life You Can Save plays this role.

I've generally been surprised over the years by the extent to which the more general 'helping others as much as we can, using evidence and reason' has been easy for people to get on board with. I had initially expected that to be less appealing, due to its abstractness/potentially leading to weird conclusions. But I'm not actually convinced that's the case anymore. And if it's not detrimental, it seems more straightforward to start with the general case, plus examples, than to start with only a more narrow example.

Comment author: Michael_PJ 12 July 2017 09:43:49AM 1 point

Hm, I'm a little sad about this. I always thought that it was nice to have GWWC presenting a more "conservative" face of EA, which is a lot easier for people to get on board with.

But I guess this is less true with the changes to the pledge - GWWC is more about the pledge than about global poverty.

That does make me think that there might be space for an EA org explicitly focussed on global poverty. Perhaps GiveWell already fills this role adequately.

Comment author: Ben_Todd 12 July 2017 04:44:03AM 0 points

That's right - we mention it as a cause to work on. That slipped my mind since that article was added only recently. Though I think it's still true we don't give the impression of representing the EA movement.
