Comment author: turchin 19 April 2018 08:48:09PM 1 point

I am puzzled by the value of non-born animals in this case. OK, fewer chickens will be born and later culled, but that means some chickens will never be born at all. In the extreme case, the whole species of farm chicken could go extinct if there were no meat consumption.

Comment author: adamaero (EA Profile) 20 April 2018 01:07:59AM * 3 points

Following a preference utilitarian system, you are correct. Hare discusses this in "Why I Am Only a Demi-Vegetarian." Singer also mentions it in Singer and His Critics.

Although that's not the reality today (in the US at least). Unethical living conditions, such as battery cages for chickens or a short life confined to small pens for other livestock, are the point. No such being wants to suffer unnecessarily. On the other hand, if factory farming were like Old MacDonald's farm, then sure. Kind of a paradox...

Comment author: Sanjay 21 March 2018 12:49:11PM 0 points

Really? "doing as much good as possible" is confusing people? I tend to use that language, and I haven't noticed people getting confused (maybe I haven't been observant enough!)

Comment author: adamaero (EA Profile) 22 March 2018 12:36:24AM 1 point

Aren't you moving further from the definition, though?

Any short definition of EA by itself I find to be abstract. Most people I encounter assume it's about doing as many small good things as possible--or worse, that it's a political philosophy (red/blue thinking). It's only when I give examples from my own life, or ask what their cause interests might be, that they slowly break away from the abstract dictionary definitions.

Comment author: Jeffhe (EA Profile) 17 March 2018 09:57:59PM * 0 points

Hey adamaero,

I agree that reasons change! But I would be curious what your current reason is :P (don't worry if you don't want to say)

Also, can you tell me which of them count as justifications and which count as reasons for you, and what the difference between a reason and a justification is, as you see it?

I understand myself to be using the word 'reason' to mean cause here, but 'reason' can also be used to mean justification since in everyday parlance, it is a pretty loose term. Something similar can be said for the words 'why' and 'because'.

As I see it, the real distinction is between a cause and a justification. We all more-or-less know what someone means when they say X is the cause of Y. However justification is less clear, so I want to share my understanding of justification (so you know where my mind is at).

As I see it, Y (e.g. an action, belief, or piece of legislation) requires justification ONLY IF it is held to some standard (perhaps an implicit one). That which does the justifying (i.e. X) does so by showing how Y in fact meets that standard. Take a CEO's actions. They are held (by shareholders and others) to the standard of being conducive to the success of the business. If it is unclear to them how one of the CEO's recent actions (say, laying off a rather effective employee) is good for the business, they might ask the CEO to justify his action. The CEO might then say that he was made aware that that employee was planning to leak company secrets. In saying this, he is offering a fact (X) that shows how his action meets the standard it is held to.

Note that it follows from this understanding of justification that justification is subjective, in the sense that justification is always justification TO SOMEONE. If you and I hold Y to different standards, then when presented with X, Y may be justified TO YOU, though it remains unjustified to me. And someone who doesn't hold Y to any standard won't even ask for a justification of it in the first place.

Note also that for many things, such as actions and beliefs, it makes sense to ask both for a cause and for a justification. But since almost everything has a cause, while relatively few things are held to a standard (implicit or explicit), questions of cause arise more often.

Finally note that cause and justification can interact in various ways. For example, a person might believe that a certain act is justified, and that belief in conjunction with a desire to act in a justified way may cause him to act in that way.

I've never shared these views about justification with anyone but a close friend. So it would be interesting to know if your view is the same.

Having said all that, I admit I could have made certain of the reasons I listed sound more "cause-y" (maybe 1 and 2). Are those the ones you're concerned about?

Comment author: adamaero (EA Profile) 18 March 2018 03:53:29PM * 0 points

I do not mean "the reason" can change--I just do not think you can reduce someone's worldview, their Weltanschauung, to one simple reason (unless maybe for #6).

Regardless, I don't think a survey here would be representative anyway.

Comment author: adamaero (EA Profile) 17 March 2018 08:13:05PM * 0 points

Sorry, I cannot choose one. Reasons change. There was never a be-all end-all reason for me.
(Also, a few of these are justifications instead of actual reasons.)

For lack of a better* English word, vicissitude (natural change visible in nature or in human affairs) comes closest to why I refuse to choose "the reason." It doesn't truly exist ;)

*Vicissitude usually has a negative connotation.


2. 3. 4. 1 ≡ 8 ∴ 9.

Comment author: DustinWehr 12 March 2018 06:13:32PM 1 point

Good points. I don't think "(benevolence)"/"(beneficence)" adds anything, either. Beneficence is effectively EA lingo. You're not going to draw people in by teaching them lingo. Do that a little further into onboarding.

Comment author: adamaero (EA Profile) 12 March 2018 07:06:59PM 0 points

I'm glad you said so. From now on I'll use "well-meaning"/"good intentions" and "evidence-based good" instead.

Comment author: adamaero (EA Profile) 10 March 2018 04:12:50PM * 5 points

Thanks. This will be useful for a future presentation. Although, I am going to modify challenges 3-6. Using the word "utilitarian" seems... limiting. EA has utilitarian/consequentialist underpinnings--but not a full-blown subscription to only that moral system (i.e., it is not exclusive). But I'm sure you knew that already. (See MacAskill's comment on "Effective Altruism" as utilitarian equivocation.)

Off the top of my head, I'm thinking something more along the lines of maximizing impact, and the empathy-altruism hypothesis as it relates to meaning well (benevolence) versus actually doing good (beneficence). (Additionally, going to add an outline =)

Also, the slide about Effective Altruism as a movement founded in 2011? I'm guessing that date refers to 80,000 Hours, because GWWC has been around since 2009, and the main idea has been around since at least 1972.

Comment author: adamaero (EA Profile) 06 March 2018 04:14:23PM * 1 point

Side note - Have you looked at the Wikipedia pages for Effective Altruism in other languages and translated them to English?

Examples: sv.wikipedia.org/wiki/Effektiv_altruism ~ marginal impact, impartial thinking, counterfactual thinking. es.wikipedia.org ~ comparative wealth, etc.

Just something someone here may find interesting.

Comment author: adamaero (EA Profile) 05 March 2018 09:47:56PM * 0 points

"Eventually, though, I worked out a diet plan that would be both healthy and easy to follow."

So do you have that diet plan? Please link.


Related - Hey vegans, what is the easiest (least-prep) three-meals-per-day plan for a week?

Comment author: adamaero (EA Profile) 23 February 2018 03:38:14AM * 1 point

Please know, I am not being critical, just genuinely curious.

"We expect to have a particular emphasis on funding groups aiming to transition from being run by volunteers to being run by full-time, paid organizers." Why? What more can a paid organizer do?

I'm thinking about myself, and I don't see how paying me would significantly increase the time I put into EA advocacy. For example, I plan to put up posters tailored to college students in the academic buildings. After that, speaking to several large lecture halls before class starts (given permission from each prof). Although, in retrospect, I am more of an average-joe EA (earning to give, on the brink of going from the GWWC 1% student minimum to the professional 10%, and investing the rest).

$5k for renting out a facility? $100k for a group--for what? A bigger facility? Or is it more like those fancy $500-a-plate dinners? Is there an EA organizer who's put on a benefit-type dinner before? I mean, I presume that putting on such events requires money to start with...

Comment author: adamaero (EA Profile) 22 February 2018 02:36:22AM * -3 points

@Matthew_Barnett As a senior electrical engineering student proficient in a variety of programming languages, I do think AI is important to think about and discuss. The theoretical threat of a malevolent strong AI would be immense. But that does not mean one has cause, or a valid reason, to support CS grad students financially.

A large asteroid collision with Earth would also be quite devastating, yet it does not follow that one should fund and support aerospace grads. Perhaps I really mean this: AI safety is an Earning to Give non sequitur.

Lastly, again, there is no evidence or results. Effective Altruism is about being beneficent instead of merely benevolent (meaning well)--in other words, making decisions based on well-researched initiatives (e.g., bed nets). Since strong AI does not exist, it does not make sense to support it through E2G. (I'm not saying it will never exist; that is unknown.) Of course, there are medium-term (systemic change) causes with results that more or less rely on historical-type empiricism--but that's still some type of evidence. For poverty we have RCTs and developmental economics. For AI safety, [something?]. For animal suffering we have proof that less miserable conditions can become a reality.
