In response to EAGx Relaunch
Comment author: Evan_Gaensbauer 04 August 2017 07:17:55AM 1 point [-]

What's the timeframe in which CEA will be accepting applications to host/organize EAGx events?

Comment author: Linch 13 July 2017 05:56:57AM 0 points [-]

This seems like a perfectly reasonable comment to me. Not sure why it was heavily downvoted.

Comment author: Evan_Gaensbauer 14 July 2017 08:09:15AM 0 points [-]

Talking about people in the abstract, or in a tone that treats them as some kind of "other", is to generalize and stereotype. Or maybe generalizing and stereotyping people others them, and makes them too abstract to empathize with. Whatever the direction of causality, there are good reasons people might take my comment poorly. There are lots of skirmishes online in effective altruism between causes, and I expect most of us don't like being lumped together in a big bundle, because under those circumstances at least a bunch of people in your ingroup will feel strawmanned. That's what my comment reads like. That's not my intention.

I'm just trying to be frank. On the Effective Altruism Forum, I try to follow Grice's maxims, because I think writing in that style heuristically optimizes the fidelity of our words to the sort of epistemic communication standards the EA community aspires to, especially as inspired by the rationality community. I could do better on the maxims of quantity and manner/clarity sometimes, but I think I do a decent job on here. I know this isn't the only thing people will value in discourse. However, there are lots of competing standards for what the most appropriate discourse norms are, and nobody is showing others how their preferred norms would maximize not just the satisfaction of their own preferences, but the total or average satisfaction of what everyone values out of discourse. That seems the utilitarian thing to do.

The effects of ingroup favouritism between competing cause selections don't seem healthy for the EA ecosystem. If we want to get very specific, here's how finely the EA community can be sliced up by cause-selection-as-group-identity.

  • vegan, vegetarian, reducetarian, omnivore/carnist
  • animal welfarist, animal liberationist, anti-speciesist, speciesist
  • AI safety, x-risk reducer (in general), s-risk reducer
  • classical utilitarian, negative utilitarian, hedonic utilitarian, preference utilitarian, virtue ethicist, deontologist, moral intuitionist/none-of-the-above
  • global poverty EAs; climate change EAs?; social justice EAs...?

The list could go on forever. Everyone feels like they're representing not only their own preferences in discourse, but sometimes even those of future generations, all life on Earth, tortured animals, or fellow humans living in agony. Unless as a community we make a conscientious effort to reach towards some shared discourse norms which are mutually satisfactory to multiple parties or individual effective altruists, however they see themselves, communication failure modes will keep happening. There's strawmanning and steelmanning, and then there are representations of concepts in EA which fall in between.

I think if we as a community expect everyone to impeccably steelman everyone all the time, we're being unrealistic. Rapid growth of the EA movement is what organizations from various causes seem to be rooting for. That means lots of newcomers who aren't going to read all the LessWrong Sequences or Doing Good Better before they start asking questions and contributing to the conversation. When they get downvoted for not knowing the archaic codex that is evolved EA discourse norms, which aren't written down anywhere, they're going to exit fast. I'm not going anywhere, but if we aren't willing to be more charitable to people we at first disagree with than they are to us, this movement won't grow. People might be belligerent, or alarmed, at the challenges EA presents to their moral worldview, but they're still curious. Spurning doesn't lead to learning.

All of the above refers only to specialized discourse norms within effective altruism. This would be on top of the complications of effective altruists' private lives, all the usual identity politics, and otherwise the common decency and common sense we would expect of posters on the forum. All of that can already be difficult for diverse groups of people as is. But for all of us to go around assuming the illusion of transparency makes things fine and dandy with regard to how a cause is represented, without openly discussing it, is to expect too much of each and every effective altruist.

Also, as of this comment, my parent comment above has net positive 1 upvote, so it's all good.

Comment author: DavidNash 10 July 2017 09:04:47PM 0 points [-]

Might it be that 80k recommends x-risk because it's neglected (even within EA), and that if more than 50% of EAs had x-risk as their highest priority it would no longer be as neglected?

Comment author: Evan_Gaensbauer 10 July 2017 09:47:05PM 1 point [-]

I don't think that would be the case, because from the inside perspective of someone already prioritizing x-risk reduction, that cause can appear at least thousands of times more important than literally anything else. This is based on an idea formulated by philosopher Nick Bostrom: astronomical stakes (note: the speaker in the linked video is Niel Bowerman, not Nick Bostrom). The ratio of resources x-risk reducers think should be dedicated to x-risk relative to other causes is arbitrarily high. Lots of people think the argument is missing important details, or ignoring major questions, but from their own inside view, x-risk reducers probably won't be convinced by that. More effective altruists could try playing the double crux game to find the source of disagreement about typical arguments for far-future causes. Otherwise, x-risk reducers will probably maintain that, in the ideal, as many resources as possible ought to be dedicated to x-risk reduction, though in practice they may endorse other viewpoints receiving support as well.

Comment author: lukeprog 24 June 2017 11:04:21PM 1 point [-]

Sure. In that case, I won't reply to them (if they aren't posted directly to the AMA) until the AMA is "winding down," or something.

Comment author: Evan_Gaensbauer 25 June 2017 07:40:09AM 0 points [-]

Hey, that sounds great to me. Thanks. Here's my question.

Do you think science or philosophy can meaningfully separate the capacity to experience suffering or pain from however else consciousness is posited to be distributed across species? What would be fruitful avenues of research for effective altruists to pursue if it's possible to solve the problem in the first question, without necessarily addressing whatever remains of consciousness?

Comment author: Evan_Gaensbauer 24 June 2017 10:13:07AM 0 points [-]

May we submit questions here to be asked on our behalf if we don't think we'll be free on Wednesday to ask during the AMA live?

Comment author: Evan_Gaensbauer 15 June 2017 07:15:37AM 1 point [-]

There are ongoing controversies in EA, even if they're not obvious. That is, there are lots of ongoing debates in EA that flare up occasionally, but remain unresolved in terms of what concessions different effective altruists think our community ought to be willing to make. I might cover some of those in the near future, and I'll cite this blog post. This is valuable in that I'd be covering object-level controversies, and having the outline of an argument established on the forum here in a neutral fashion beforehand will be helpful. Thanks for writing this.

In response to Political Ideology
Comment author: Evan_Gaensbauer 28 May 2017 06:11:32AM *  0 points [-]

It seems that if the utility of a political ideology rests on a mindset that generates the greatest average well-being, but getting involved in political party machines to change society through policy isn't an efficient enough process, then the best thing to do is to get upstream of politics, to whatever produces the cultural changes that allow those attitudes to permeate democratic societies and, all other things being equal, engender and incentivize optimal lifestyle outcomes.

Societies throughout Africa and most of Asia still seem largely religious or traditional, while societies in the Americas, Europe, East Asia and Australia are more secular and have a stronger civic culture generating their values. I think there's a state of heightened tension in politics across the world today, and the low level of extremists who typically blend in with broader partisan coalitions are exploiting the opportunity to push an anti-humanistic agenda. Part of what's upstream of violence, paving the way for it, is dehumanizing one's political opponents with propaganda. That's something that's happening on the political fringes in North America today. There are lots of allegations of this reported in the news in Europe as well, but honestly, I don't know what counts as reliable from this far overseas. News might travel far, but accurate reporting doesn't. The accusations of political violence in North America I know of are based on multiple local news sources, eyewitness accounts, video footage, and things one can plainly see with one's own eyes by visiting these places. This has been my experience living in Vancouver, Canada, having lived on the west coast my whole life. I guess it's like that regionally all over the map, but virtually any blog or news source beyond the local level distorts real events to the point it's practically impossible to substantiate any allegations.

However, it's also been my experience that most people repudiate this type of behaviour, and it appears to be staying out of the party machine. I can't find the link now, but I read a blog post by economist Bryan Caplan arguing that, for all the hype in the news, what amounts to no more than a few thousand extremists on either side of the political divide in North America doesn't pose much of a real threat to a society of hundreds of millions who would sooner turn such people out than condone their behaviour. That's the gist of the argument, and I found it more or less convincing. Certainly, everyone ought to condemn the instigation of political violence, and not tolerate propaganda that would promote it as a legitimate means of activism. All I'm saying is that it doesn't hold up as an argument against getting involved in politics as a form of effective altruism.

I think this is a major consideration against political involvement in the world today, though it speaks only to what I know to be true of North America. For the other continents I mentioned, where political action may be a viable way to improve human well-being at this point in history, one will have to defer to someone who knows better. I definitely encourage people with that sort of knowledge to speak up. I only know enough to speak about politics in North America. However, I think the inefficiency of political involvement or activism is so great that it alone demonstrates, for most people and movements, that the opportunity costs of diverting resources to politics will be too high.

In the Americas, there's still a sizable percentage of deeply religious people in the population of democratic societies. It's commonly accepted that the political base inculcating attitudes aimed at superior well-being outcomes through policy action, and thus through electoral politics and political activism, is the socially conservative religious right. However, because effective altruism takes a stance of moral pluralism, there are certain shibboleths of religious/social conservatism much of the religious public wouldn't be willing to compromise on, which would have to remain merely tolerated in the EA community. So getting involved in that sort of political action is impracticable from a realistic EA perspective at present.

However, there are plenty of organizations and communities in and around the effective altruism movement which are taking the approach of engendering changes in cultural attitudes to set the stage for later policy reform in politics. Sentience Politics is doing this work in German-speaking Europe, and is currently trying to expand into other countries. The Life You Can Save and Giving What We Can function as projects that do so as well, and funding for EA movement growth, which promotes and spreads values that in practice lead to significant lifestyle change, is abundant. One cause which stands out as not tied to favouring cultural change is x-risk reduction. However, the base of support for that cause largely comes out of the rationality movement, and there are lots of people in the rationality community who are already supporting, promoting and creating projects aimed at changing society upstream or outside of politics. I haven't talked to anyone enough to figure out which projects would appeal best or most to effective altruism in this regard.

Comment author: Evan_Gaensbauer 24 March 2017 09:22:31PM 4 points [-]

Upvoted. Maybe this is just typical academic style, but when you address Gabriel it seems you're attributing the points of view presented to him, while when I read the original paper I got the impression he was collating and clarifying some criticisms of effective altruism so they'd be coherent enough to effect change. One thing about mainstream journalism is that it's usually written for a specific type of audience the editor has in mind, even if the source claims to be seeking a general audience, and so they spin things a certain way. While criticisms of EA definitely aren't what I'd call sensationalistic, they're written in the style of a rhetorical list of likes and dislikes about the EA movement. It's taken for granted that the implied position of the author is somehow a better alternative to what EA is currently doing, as if no explanation is needed.

Gabriel fixes this by writing up the criticisms of EA in a way that lets us understand what about the movement would need to change to satisfy critics, if we were indeed to agree with them. Really, except for the pieces published in the Boston Review, I feel like other criticisms of EA were written not for EA at all, but rather as a review of EA for other do-gooders, a warning to stay away from the movement. It's not the job of critics to solve all our problems for us, but for a movement that is at least willing to try to change in the face of criticism, it's frustrating that nobody takes us up on the opportunity, given whatever blindspots we may have, and tries to be constructive.

In response to Open Thread #36
Comment author: Evan_Gaensbauer 17 March 2017 03:26:28AM 3 points [-]

As people age their lives become more difficult. Physically and mentally, they just aren't where they previously were. Most effective altruists are younger people, and they may not take into consideration how risky it can be to not have any savings cushion in the case things change. We can't necessarily count on pension plans to cover us in our old age. We can't assume our health will always be what it is now. A lot of people will face harder times in the future, and being put in the mindset of assuming one won't face personal hardship, so one need not save money, is reckless.

It's one thing if someone aspires to be wealthy, retire at age 30 like Mr. Money Mustache, or live a luxurious retirement. But it's dangerous to create a culture in EA where people might be accused of hypocrisy for saving even enough for retirement to cover their own basic living expenses. It's also dangerous for us to presume that each of our lives will go so easily that we can work until we die, or that we won't get sick. While talking about these things in the abstract may be well and fine, I want to register my conviction that using social influence, i.e., peer pressure, alone to normalize "don't/no need to save for retirement" as practical advice among effective altruists is potentially dangerous.

Comment author: RobBensinger 07 February 2017 10:36:23PM -1 points [-]

Anonymous #2:

I'd prefer it if more people in EA were paid on a contract basis, if more people were paid lower salaries, if there were more mechanisms for the transfer of power in organizations (e.g., a 2- or 3-year term limit for CEOs and a maximum age at entry), and if there were more direct donations. Also: better systems to attract young people. More people in biology. More optimism. More willingness to broadcast arguments against working on animal welfare that have not been refuted.

Comment author: Evan_Gaensbauer 22 February 2017 05:48:12AM 1 point [-]

I originally downvoted this comment, because some of the suggestions obviously suck, but some of the points here could be improved.

I'd prefer it if more people in EA were paid on a contract basis.

There are a lot of effective altruists who have just as good ideas as anyone working at an EA non-profit, or a university, but due to a variety of circumstances, they're not able to land those jobs. Some effective altruists already run Patreons for their blogs, and I think the material coming out of them is decent, especially as they can lend voices independent of institutions on some EA subjects. Also, they have the time to cover or criticize certain topics other effective altruists aren't since their effort is taken up by a single research focus.

if more people were paid lower salaries.

Nothing can be done about this criticism if no numbers are given. Criticizing certain individuals for getting paid too much, or certain organizations for paying their staff too much, isn't an actionable criticism unless one gets specific. I know EA organizations whose staff, including the founders who decide the budget, essentially get paid minimum wage. On the other hand, GiveWell's cofounders Holden and Elie get paid well into the six figures each year. While I don't myself much care, I've privately chatted with people who perceive this as problematic. Then there may be some staff at some EA organizations who appear to others to get paid more than they deserve, especially when their salaries could cover one or more full-time salaries for other individuals perceived to be just as competent. That last statement was full of conditionals, I know, but it's something I'm guessing the anonymous commenter was concerned about.

if there were more mechanisms for the transfer of power in organizations (e.g., a 2- or 3-year term limit for CEOs and a maximum age at entry),

Again, they'd need to be specific about which organization they're talking about. The biggest problem with this comment is that the commenter made broad, vague generalizations which aren't actionable. It's uncomfortable to make specific criticisms of individuals or organizations, yes, but the point of an anonymous criticism is to be able to do that, if it's really necessary, with virtual impunity, while bad commentary which amounts to character assassination can easily be written off without a flamewar ensuing or feelings getting hurt.

Anyway, I too can sympathize with demands for more accountability, governance and oversight at EA organizations. For example, many effective altruists have been concerned time and again with the influence of major organizations like the Centre for Effective Altruism which, even if it's not their intent, may be perceived to represent and speak for the movement as a whole. This could be a problem. However, while EA need not only be a social movement predicated on and mediated through registered NPOs, it by and large is and will continue to be in practice, as most social movements that are at all centralized are. Making special asks for more democratic governance at these organizations, without posting the suggestions directly to the EA Forum and making them consistent with how NPOs operate in a given jurisdiction, will just not result in change. These suggestions really stand out considering they're more specific than anything I've seen anyone call for, as if this were a desperate problem in EA, when at most I've seen similar sentiments expressed as vague concerns on the EA Forum.

and if there were more direct donations.

The EA Forum and other channels like the 'Effective Altruism' Facebook group appear dominated by fundraisers and commentary on and from metacharities because those are literally some of the only appropriate outlets for metacharities to fundraise or to publish transparency reports. Indeed, that they're posting material besides fundraisers beyond their own websites is a good sign, as it's the sort of transparency and peer review the movement at large would demand of metacharities. Nonetheless, between this and constant chatter about metacharities on social media, I can see how the perception arises that most donations are indirect and go to metacharities. However, this may be illusory. The 2015 EA Survey, the latest for which results are available, shows effective altruists overwhelmingly donate to GiveWell's recommended charities. Data isn't available on the amounts of money self-identified effective altruists are moving to each of these charities, so it's possible lots of effective altruists earning to give are making primarily indirect donations. Anecdotally, though, this doesn't seem to be the case. If one wants to make that case, and then mount a criticism based on it, one must substantiate it with evidence.
