kbog comments on Near-Term Effective Altruism Discord - Effective Altruism Forum


Comment author: kbog  (EA Profile) 10 September 2018 03:46:43AM *  0 points

All three of those are merely cases of you disagreeing with my claims or my confidence in them. I thought I was being tone-policed, but you are just saying that I am wrong.

Too many times on Facebook groups, I have to see local events that I can't attend.

The fact that people are unable to attend something is one of the problems with the server that is being promoted here. I'm not in favor of anything in EA that does this, if someone ever tries to exclude near-term EAs from their event then give me a ping and I will argue with them too!

Too many times I see EA posts that have no relevance to my involvement in EA.

Theoretical physicists are not upset by the presence of discussion on experimental physics, and the ones who disbelieve in dark matter are not upset by the presence of discussion from people who do. If lots of posts aren't relevant to you, the right answer is presumably to ignore those posts; I and so many other EAs do it all the time, it's easy.

If you want more content that is relevant to you... that's perfect! Make it! Request it! Ask questions about it! Be the change that you wish to see in the world.

Perhaps think about it like the difference between the Physics Stack Exchange chat and the Electrical Engineering (EE) Stack Exchange chat. They're very nearly the same, and EE is obviously based in physics. But they're separate.

The physics stack exchange doesn't try to exclude engineers, and they didn't make it because they thought that engineers were "alienating"; if they operated on that basis then it would create unnecessary annoyance for everyone. They are separate because they are different topics, with different questions that need to be answered, and the skills and education relevant to one are very different from those that matter for the other. But "near-term Effective Altruism", just like "long-term Effective Altruism", is a poorly specified bundle of positions with no common methodological thread. The common thread within each bundle is not any legitimate underlying presupposition about values or methodology that might form the foundation for further inquiry; it is an ex post facto conclusion that the right cause is something that happens to be short- or long-term. And while some cause conclusions could form a meaningful basis for significant further inquiry (e.g., you selected poverty as a cause, so now you just want to talk about poverty relief), the mere conclusion that the right cause is something that matters in the near or long term does not, because there is little in the way of general ideas, tools, resources, or methodologies that matter greatly for one bundle of causes but not the other.

But not only is the original analogy with physics and engineering relevantly incorrect, it's specifically pernicious, because many EAs already implicitly have the misconception that supporting near-term or long-term causes is a matter of philosophical presupposition or overarching methodology; in fact it is probably the greatest confusion that EAs have about EA and therefore it wouldn't be wise to reinforce it.

Comment author: adamaero  (EA Profile) 11 September 2018 08:51:51PM *  0 points

@kbog: Most of your responses with respect to my reply do not make sense. For example, EA Chicago posts their events on the Facebook page. I don't live in Chicago...(simple as that)

The physics stack exchange doesn't try to exclude engineers

~ completely missed the point. Additionally, the analogy is fine. There is seldom such a thing as an absolute analogy. With that, it doesn't follow that the analogy is somehow wrong because of these elusive, implicit misconceptions EAs hold about EA.

So to sum up, you're reading way too far into what I wrote originally. I was answering your question related to why your first reply was "harsher than necessary".

Comment author: kbog  (EA Profile) 11 September 2018 09:39:13PM *  -1 points

EA Chicago posts their events on the Facebook page. I don't live in Chicago...(simple as that)

OK, but that has nothing to do with whether or not we should have this Discord server... why bring it up? In the context of your statements, can't you see how much it looks like someone is complaining that there are too many events that only appeal to EAs who support long-term causes, and too few events for EAs who support near-term causes?

~ completely missed the point. Additionally, the analogy is fine. There is seldom such a thing as an absolute analogy

It's not that the analogy was not absolute, it's that it was relevantly wrong for the topic of discussion. But given that your argument doesn't seem to be what I thought it was, that's fine, it could very well be relevant for your point.

I was answering your question related to why your first reply was "harsher than necessary".

I figured that "harsh" refers to tone. If I insult you, or try to make you feel bad, or inject vicious sarcasm, then I'm being harsh. You didn't talk about anything along those lines, but you did seem to be disputing my claims about the viability of the OP, so I took it to be a defense of having this new Discord server. If you're not talking on either of those issues then I don't know what your point is.

Comment author: adamaero  (EA Profile) 12 September 2018 12:55:09AM -1 points

They were examples of how I saw your post as "harsher than necessary". You've diluted these mere examples into a frivolous debate. If you believe you were not harsh at all, then believe what you want to believe.

Comment author: kbog  (EA Profile) 12 September 2018 04:13:46AM *  -2 points

As I stated already, "harsh" is a question of tone, and you clearly weren't talking about my tone. So I have no clue what your position is or what you were trying to accomplish by providing your examples. There's nothing I can do in the absence of clarification.

Comment author: adamaero  (EA Profile) 12 September 2018 07:50:18PM *  -1 points

Diction and pronouns have tone (e.g., "you're reinforcing" vs. a more modest "that could reinforce"). With that, expressing certainty about predictions (e.g., "whenever a group of people") is another way I saw the original comment as harsh, unless you're an expert in the field (and a relevant study would help too). I, for one, am neither an anthropologist nor a sociologist.


I'm not debating whether you were harsh here. You asked how, and I quoted the statements I saw as the most harsh and most questionable. [I'm trying to say this lightly. Instead I could have made that last bit "furthest from the truth", but I didn't, because I'm trying to demonstrate. (And that's not what I really mean anyway.)] I never said you are wrong about _ _ _ _ _. I said it may not be true; it may be true.

You seem to still think the original comment was not harsher than necessary by your own definition of tone. Either way, I'm guessing Mrs. Wise gave you far less confusing pointers in her PM.

Comment author: Dunja 10 September 2018 09:04:30AM *  0 points

Hi Kbog, I see your point concerning near/far-future ideas in principle. However, if you look at the practical execution of these ideas, things aren't following your lines of reasoning (unfortunately, of course). For instance, the community practices related to far-future focus (in particular AI-risks) have adopted the assessment of scientific research and the funding thereof, which I find lacking scientific rigor, transparency and overall validity (to the point that it makes no sense to speak of "effective" charity). Moreover, there is a large consensus about such evaluative practices: they are assumed as valid by OpenPhil and the EAF, and even when I tried to exchange arguments with both of these institutions, nothing has ever changed (I've never even managed to push them into a public dialogue on this topic). I see this problem as a potential danger for the EA community as a whole (just think of the press getting their hands on this problem and arguing that EAs finance scientific research which is assumed effective, where it is unclear according to which criteria it would count as such; similarly for newcomers). In view of this, I think dividing these practices would be a great idea. The fact that they are connected to "far-future EA" is secondary to me, and it is unfortunate that far-future ideas have turned into a bubble of their own, closed towards criticism questioning the core of their EA methodology.

That said, I agree with some of your worries (see my other comment here).

Comment author: kbog  (EA Profile) 10 September 2018 09:53:47AM *  1 point

Hi Kbog, I see your point concerning near/far-future ideas in principle. However, if you look at the practical execution of these ideas, things aren't following your lines of reasoning (unfortunately, of course).

Well the main point of my comment is that people should not reinforce wrong practices by institutionalizing them.

For instance, the community practices related to far-future focus (in particular AI-risks) have been embedded in the assessment of scientific research and the funding thereof,

What is it when money goes to GiveWell or Animal Charity Evaluators? Funding scientific research. Don't poverty interventions need research? Animal advocacy campaigns? Plant-based meat? Is it only the futurists who are doing everything wrong, when numerous complaints have been lodged against the research quality of GiveWell and ACE?

which I find lacking scientific rigor, transparency and overall validity

Well I haven't claimed that the evaluation of futurist scientific research is rigorous, transparent or valid. I think you should make a compelling argument for that in a serious post. Telling us that you failed to persuade groups such as Open Phil and the EAF doesn't exactly show us that you are right.

Note: it's particularly instructive here, as we evaluate the utility of the sort of segregation proposed by the OP, how the idea that EA ought to be split along these lines is bundled with the assertion that the Other Side is doing things "wrong"; we can see that the nominally innocuous proposal for categorization is operationalized to effect the general discrediting of those with an opposing point of view, which is exactly why it is a bad thing.

just think of the press getting their hands on this problem and arguing that EAs finance scientific research which is assumed effective, where it is unclear according to which criteria it would count as such

Just think of the press reporting on us doing exactly the same thing as everyone else in science? If you are worried about bad press, the #1 thing you should avoid is trying to kick up the social divisions that would give them something actually juicy to report on.

closed towards criticism questioning the core of their EA methodology

Where is this criticism? Where are the arguments on cause prioritization? Where is the review of the relevant academic literature? Where is the quantitative modeling? I see people complain that their "criticisms" aren't being met, but when I look for these criticisms, the search for the original source bottoms out either in sparse lines of assertions in web comments, or quite old arguments that have already been accepted and answered, and in either case opponents are clearly ready and willing to engage with such criticism. The claim that people are "closed towards criticism" invariably turns out to be nothing but the fact that the complainant failed to change anyone's mind, but seldom does the complainant question whether they are right at all.

Comment author: Dunja 10 September 2018 10:14:26AM *  1 point

Wow, you really seem annoyed... I didn't expect such a pissed-off post, but I suppose you got really annoyed by this thread or something. I provided the arguments in detail concerning OpenPhil's practices in a post from a few months ago here: http://effective-altruism.com/ea/1l6/how_effective_and_efficient_is_the_funding_policy/.

I have a few paper deadlines these days, so as much as I wish to respond with all the references, arguments, etc. I don't have the time. I plan on writing a post concerning EAF's funding policy as well, where I'll sum it up in a similar way as I did for OpenPhil.

That said, I'm not saying we shouldn't criticize the research done by near-future organizations; to the contrary. And I completely agree: it'd be great to have a forum devoted only to research practices and the funding thereof. But concerning far-future funding, research is the only thing that can be funded, which makes it particularly troublesome.

Just think of the press reporting on us doing exactly the same thing as everyone else in science? If you are worried about bad press, the #1 thing you should avoid is trying to kick up the social divisions that would give them something actually juicy to report on.

Err, no. Funding by academic institutions follows a whole set of criteria (take the ERC scheme, for instance), which can of course be discussed on their own, but they aim at efficient and effective research. The funding of AI-risk-related projects follows... well, nobody could ever specify to me any criteria to begin with, except "an anonymous reviewer whom we trust likes the project" or "they seem to have many great publications", which, once looked at, don't really exist. That's as far from academic procedures as it gets.

Comment author: kbog  (EA Profile) 10 September 2018 11:38:15AM *  1 point

I assumed your post to be more of a nominal attempt to disagree with me than it really was, so the failure of some of its statements to constitute specific rebuttals of my points became irritating. I've edited my comment to be cleaner. I apologize for that.

I provided the arguments in detail concerning OpenPhil's practices in a post from a few months ago here: http://effective-altruism.com/ea/1l6/how_effective_and_efficient_is_the_funding_policy/.

Okay, and if we look at that post, we see some pretty complete and civil responses to your arguments. Seems like things are Working As Intended. I am responding to some of your claims in that thread so that the discussion gets collected in the right place. But going back to the conversation here, you seem to be pretty clear that it is possible to have effective and efficient science funding, even if Open Phil isn't doing it right. Plus, you're only referring to Open Phil/EAF, not everyone else who supports long-term causes. So clearly it would be inappropriate for long-term EA causes to be separated.

But concerning far-future funding, research is the only thing that can be funded, which makes it particularly troublesome.

We can push for political change at the national or international level, we can grow the EA movement, or we can do animal advocacy. Those are known and viable far-future cause areas, even if they don't get as much attention under that guise.

Comment author: Dunja 10 September 2018 11:55:17AM 0 points

No worries! Thanks for that, and yes, I agree pretty much with everything you say here. As for the discussion on far-future funding, it did start in the comments on my post, but it led to no practical changes in terms of the transparency of the criteria used for the assessment of funded projects. I'll try to write a separate, more general post on that.

My only point was that, due to the high presence of "far-future bias" on this forum (I might be wrong, but much of the downvoting-without-commenting seems to indicate at least a tendency towards biased outlooks), it's nice to have some chats on more near-future-related topics and strategies for promoting those goals. I see a chat channel more as a complementary venue to this forum than as an alternative.

Comment author: kbog  (EA Profile) 10 September 2018 12:20:31PM *  0 points

It's extremely hard to identify bias without proper measurement/quantification, because you need to separate it from actual differences in the strength of people's arguments, as well as legitimate expression of a majority point of view, and your own bias. In any case, you are not going to get downvoted for talking about how to reduce poverty. I'm not sure what you're really worried about. At some point you have to accept that no discussion space is perfect, that attempts to replace good ones usually turn out to be worse, and that your time is better spent focusing on the issues. But when I look through your comment history, you seem to not be talking about near-future related topics and strategies, you're just talking about meta stuff, Open Phil, the EA forums, critiques of the EA community, critiques of AI safety, the same old hot topics. Try things out before judging.

Comment author: Dunja 10 September 2018 01:14:24PM *  0 points

First, I disagree with your imperatives concerning what one should do before engaging in criticism. That's a non-sequitur: we are able to reflect on multiple meta-issues without engaging in any of the object-related ones and at the same time we can have a genuine interest in reading the object-related issues. I am genuinely interested in reading about near-future improvement topics, while being genuinely interested in voicing my opinion on all kinds of meta issues, especially those that are closely related to my own research topics.

Second, the fact that measuring bias is difficult doesn't mean bias doesn't exist.

Third, to use your phrase, I am not sure what you are really worried about: having different types of venues for discussion doesn't seem harmful especially if they concern different focus groups.

Comment author: kbog  (EA Profile) 10 September 2018 08:05:03PM *  0 points

That's a non-sequitur: we are able to reflect on multiple meta-issues without engaging in any of the object-related ones and at the same time we can have a genuine interest in reading the object-related issues

Mhm, it's POSSIBLE to talk about it, bias MAY exist, etc, etc. There's still a difference between speculation and argument.

having different types of venues for discussion doesn't seem harmful especially if they concern different focus groups.

different venues are fine, they must simply be split along legitimate lines (like light chat vs serious chat, or different specific causes; as I stated already, those are legitimate ways to split venues). Splitting things along illegitimate lines is harmful for reasons that I stated earlier in this thread.

Comment author: Dunja 10 September 2018 08:31:04PM 0 points

Mhm, it's POSSIBLE to talk about it, bias MAY exist, etc, etc. There's still a difference between speculation and argument.

Could you please explain what you are talking about here, since I don't see how this is related to what you quote me saying above? Of course there is a difference between speculation and argument, and arguments may still include a claim that's expressed in a modal way. So I don't really understand how this is challenging what I have said :-/

different venues are fine, they must simply be split along legitimate lines (like light chat vs serious chat, or different specific causes; as I stated already, those are legitimate ways to split venues). Splitting things along illegitimate lines is harmful for reasons that I stated earlier in this thread.

Having a discussion focusing on certain projects rather than others (in view of my suggestion directly to the OP) allows for such a legitimate focus, so why not?

Comment author: Dunja 10 September 2018 01:38:29PM *  -1 points

I have to single out this one quote from you, because I have no idea where you are getting all this fuel from:

But when I look through your comment history, you seem to not be talking about near-future related topics and strategies, you're just talking about meta stuff, Open Phil, the EA forums, critiques of the EA community, critiques of AI safety, the same old hot topics. Try things out before judging.

Can you please explain what you are suggesting here? How is this conflicting with my interest in near-future related topics? I have a hard time understanding why you are so confrontational. Your last sentence:

Try things out before judging.

is the highest peak of unfriendliness. What should I try exactly before judging?!

Comment author: kbog  (EA Profile) 10 September 2018 07:58:28PM *  0 points

I don't know of any less confrontational/unfriendly way of wording those points. That comment is perfectly civil.

Can you please explain what you are suggesting here? How is this conflicting with my interest in near-future related topics?

It should be clear, no? It's hard to judge the viability of talking about X when you haven't talked about X.

What should I try exactly before judging?!

Look, it's right there in the original comment - "talking about near-future related topics and strategies". I don't know how else I can say this.

Comment author: Dunja 10 September 2018 08:21:59PM *  0 points

Civil can still be unfriendly, but hey, if you aren't getting it, it's fine.

It should be clear, no? It's hard to judge the viability of talking about X when you haven't talked about X.

If it was clear, why would I ask? There's your lack of friendliness in action. And I still don't see the rationale in what you are saying: I can judge that certain topics may profit from being discussed in a certain context A even if I haven't personally engaged in discussing them in that context. The burden of proof is on you: if you want to make an argument, you have to provide more than just a claim. So far, you are just stating something which I currently can't make any sense of.

"talking about near-future related topics and strategies". I don't know how else I can say this.

Again: why would someone be able to assess the viability of the context in which a certain topic is discussed only if they have engaged in the discussion of that topic? As I said above, this is a non-sequitur, or at least you haven't provided any arguments to support this thesis. I can be in a position to suggest that scientists may profit from exchanging their ideas in a venue A even if I myself haven't exchanged any ideas in A.