
kbog comments on Near-Term Effective Altruism Discord - Effective Altruism Forum


Comment author: kbog, 10 September 2018 09:53:47AM, 1 point

> Hi Kbog, I see your point concerning near/far-future ideas in principle. However, if you look at the practical execution of these ideas, things aren't following your lines of reasoning (unfortunately, of course).

Well, the main point of my comment is that people should not reinforce wrong practices by institutionalizing them.

> For instance, the community practices related to far-future focus (in particular AI risks) have been embedded in the assessment of scientific research and the funding thereof,

What is it when money goes to GiveWell or Animal Charity Evaluators? Funding scientific research. Don't poverty interventions need research? Animal advocacy campaigns? Plant-based meat? Is it only the futurists who are doing everything wrong, when numerous complaints have been lodged against the research quality of GiveWell and ACE?

> which I find lacking scientific rigor, transparency and overall validity

Well, I haven't claimed that the evaluation of futurist scientific research is rigorous, transparent, or valid. If you think it isn't, you should make a compelling argument to that effect in a serious post. Telling us that you failed to persuade groups such as Open Phil and the EAF doesn't exactly show us that you are right.

Note: as we evaluate the utility of the sort of segregation proposed by the OP, it's particularly instructive to see how the idea that EA ought to be split along these lines comes bundled with the assertion that the Other Side is doing things "wrong". The nominally innocuous proposal for categorization is operationalized to discredit those with an opposing point of view, which is exactly why it is a bad thing.

> just think of the press getting their hands on this problem and arguing that EAs finance scientific research which is assumed to be effective, when it is unclear by which criteria it would count as such

Just think of the press reporting on us doing exactly the same thing as everyone else in science? If you are worried about bad press, the #1 thing you should avoid is trying to kick up the social divisions that would give them something actually juicy to report on.

> closed towards criticism questioning the core of their EA methodology

Where is this criticism? Where are the arguments on cause prioritization? Where is the review of the relevant academic literature? Where is the quantitative modeling? I see people complain that their "criticisms" aren't being addressed, but when I look for these criticisms, the search for the original source bottoms out either in sparse lines of assertion in web comments or in quite old arguments that have already been acknowledged and answered; in either case, opponents are clearly ready and willing to engage with such criticism. The claim that people are "closed towards criticism" invariably turns out to be nothing more than the fact that the complainant failed to change anyone's mind, yet the complainant seldom questions whether they themselves are right at all.

Comment author: Dunja, 10 September 2018 10:14:26AM, 1 point

Wow, you really seem annoyed... I didn't expect such a pissed-off post, but I suppose this thread really got to you. I laid out my arguments concerning Open Phil's practices in detail in a post from a few months ago: http://effective-altruism.com/ea/1l6/how_effective_and_efficient_is_the_funding_policy/.

I have a few paper deadlines these days, so as much as I'd like to respond with all the references and arguments, I don't have the time. I plan on writing a post about EAF's funding policy as well, where I'll sum things up in a similar way to what I did for Open Phil.

That said, I'm not saying we shouldn't criticize the research done by near-future organizations; on the contrary. And I completely agree: it'd be great to have a forum devoted solely to research practices and the funding thereof. But where far-future causes are concerned, research is the only thing that can be funded, which makes it particularly troublesome.

> Just think of the press reporting on us doing exactly the same thing as everyone else in science? If you are worried about bad press, the #1 thing you should avoid is trying to kick up the social divisions that would give them something actually juicy to report on.

Err, no. Funding by academic institutions follows a whole set of criteria (take the ERC scheme, for instance), which can of course be debated on their own terms, but which aim at efficient and effective research. The funding of AI-risk-related projects follows... well, nobody has ever been able to specify any criteria to me, beyond "an anonymous reviewer whom we trust likes the project" or "they seem to have many great publications", which, once you actually look, turn out not to exist. That's as far from academic procedure as it gets.

Comment author: kbog, 10 September 2018 11:38:15AM, 1 point

I assumed your post was more of an attempt to disagree with me than it really was, so I was irritated when some of its statements failed to constitute specific rebuttals of my points. I've edited my comment to be cleaner; I apologize for that.

> I laid out my arguments concerning Open Phil's practices in detail in a post from a few months ago: http://effective-altruism.com/ea/1l6/how_effective_and_efficient_is_the_funding_policy/.

Okay, and if we look at that post, we see some pretty thorough and civil responses to your arguments. Seems like things are Working As Intended. I am responding to some of your claims in that thread so that the discussion gets collected in the right place. But coming back to the conversation here: you seem to be pretty clear that effective and efficient science funding is possible, even if Open Phil isn't doing it right. Plus, you're only referring to Open Phil and the EAF, not everyone else who supports long-term causes. So clearly it would be inappropriate for long-term EA causes to be separated out.

> But where far-future causes are concerned, research is the only thing that can be funded, which makes it particularly troublesome.

We can push for political change at the national or international level, grow the EA movement, or do animal advocacy. Those are known and viable far-future cause areas, even if they don't get as much attention under that label.

Comment author: Dunja, 10 September 2018 11:55:17AM, 0 points

No worries! Thanks for that, and yes, I pretty much agree with everything you say here. As for the discussion of far-future funding, it did start in the comments on my post, but it came nowhere near practical changes in terms of the transparency of the criteria used to assess funded projects. I'll try to write a separate, more general post on that.

My only point was that, given the prevalence of "far-future bias" on this forum (I might be wrong, but much of the downvoting-without-commenting suggests at least a tendency towards biased outlooks), it's nice to have some chats on more near-future topics and on strategies for promoting those goals. I see a chat channel as complementary to this forum rather than as an alternative to it.

Comment author: kbog, 10 September 2018 12:20:31PM, 0 points

It's extremely hard to identify bias without proper measurement and quantification, because you need to separate it from actual differences in the strength of people's arguments, from the legitimate expression of a majority point of view, and from your own bias. In any case, you are not going to get downvoted for talking about how to reduce poverty, so I'm not sure what you're really worried about. At some point you have to accept that no discussion space is perfect, that attempts to replace good ones usually turn out worse, and that your time is better spent focusing on the issues. But when I look through your comment history, you don't seem to be talking about near-future topics and strategies; you're just talking about meta stuff: Open Phil, the EA Forum, critiques of the EA community, critiques of AI safety, the same old hot topics. Try things out before judging.

Comment author: Dunja, 10 September 2018 01:14:24PM, 0 points

First, I disagree with your imperatives concerning what one should do before engaging in criticism. That's a non sequitur: we are able to reflect on meta-issues without engaging in the object-level ones, while at the same time having a genuine interest in reading about them. I am genuinely interested in reading about near-future topics, and equally interested in voicing my opinion on all kinds of meta issues, especially those closely related to my own research.

Second, the fact that measuring bias is difficult doesn't mean bias doesn't exist.

Third, to use your phrase, I am not sure what you are really worried about: having different types of venues for discussion doesn't seem harmful, especially if they concern different focus groups.

Comment author: kbog, 10 September 2018 08:05:03PM, 0 points

> That's a non sequitur: we are able to reflect on meta-issues without engaging in the object-level ones, while at the same time having a genuine interest in reading about them

Mhm, it's POSSIBLE to talk about it, bias MAY exist, etc., etc. There's still a difference between speculation and argument.

> having different types of venues for discussion doesn't seem harmful, especially if they concern different focus groups

Different venues are fine; they must simply be split along legitimate lines (like light chat vs. serious chat, or different specific causes, as I said already). Splitting things along illegitimate lines is harmful, for the reasons I stated earlier in this thread.

Comment author: Dunja, 10 September 2018 08:31:04PM, 0 points

> Mhm, it's POSSIBLE to talk about it, bias MAY exist, etc., etc. There's still a difference between speculation and argument.

Could you please explain what you are talking about here? I don't see how it relates to what you quote me saying above. Of course there is a difference between speculation and argument, but an argument may still include a claim expressed in a modal way. So I don't really understand how this challenges what I said :-/

> Different venues are fine; they must simply be split along legitimate lines (like light chat vs. serious chat, or different specific causes, as I said already). Splitting things along illegitimate lines is harmful, for the reasons I stated earlier in this thread.

A discussion focused on certain projects rather than others (as per my suggestion directly to the OP) allows for exactly such a legitimate focus, so why not?

Comment author: kbog, 10 September 2018 08:37:37PM, 0 points

> Could you please explain what you are talking about here? I don't see how it relates to what you quote me saying above.

The part where I say "it's POSSIBLE to talk about it" relates to your claim "we are able to reflect on meta-issues without engaging in the object-level ones, while at the same time having a genuine interest in reading about them", and the part where I say "bias MAY exist" relates to your claim "the fact that measuring bias is difficult doesn't mean bias doesn't exist."

> A discussion focused on certain projects rather than others (as per my suggestion directly to the OP) allows for exactly such a legitimate focus, so why not?

Your suggestion to the OP to only host conversation about "[projects that] improve the near future" relies on the same near-term vs. long-term distinction, and is therefore still the wrong way to carve up the issues, for the same reasons I gave earlier.

Comment author: Dunja, 10 September 2018 08:51:01PM, 0 points

Right, we are able to; that doesn't mean we cannot form arguments. Since when do arguments exist only when we can be absolutely certain about something?

As for my suggestion: unfortunately, as I've said above, there is a bubble in the EA community around far-future prioritization, which may be overshadowing and off-putting to some who are interested in other topics. In the ideal context of rational discussion, your points would hold completely. But we are talking about a very specific context in which a number of biases are already entrenched, and people tend to be put off by that. Your approach in this discussion with me alone is super off-putting, and my best guess is that you behave like this because you are hiding behind an anonymous identity. I wonder whether you'd be so rude if we talked in person (for examples, see my previous replies to you). I doubt it.

Comment author: Dunja, 10 September 2018 01:38:29PM, -1 points

I have to single out this one quote from you, because I have no idea where you are getting all this fuel from:

> But when I look through your comment history, you don't seem to be talking about near-future topics and strategies; you're just talking about meta stuff: Open Phil, the EA Forum, critiques of the EA community, critiques of AI safety, the same old hot topics. Try things out before judging.

Can you please explain what you are suggesting here? How does this conflict with my interest in near-future topics? I have a hard time understanding why you are so confrontational. Your last sentence:

> Try things out before judging.

is the height of unfriendliness. What exactly should I try before judging?!

Comment author: kbog, 10 September 2018 07:58:28PM, 0 points

I don't know of any less confrontational/unfriendly way of wording those points. That comment is perfectly civil.

> Can you please explain what you are suggesting here? How does this conflict with my interest in near-future topics?

It should be clear, no? It's hard to judge the viability of talking about X when you haven't talked about X.

> What exactly should I try before judging?!

Look, it's right there in the original comment: "talking about near-future topics and strategies". I don't know how else I can say this.

Comment author: Dunja, 10 September 2018 08:21:59PM, 0 points

Civil can still be unfriendly, but hey, if you aren't getting it, it's fine.

> It should be clear, no? It's hard to judge the viability of talking about X when you haven't talked about X.

If it were clear, why would I ask? There's your lack of friendliness in action. And I still don't see the rationale in what you are saying: I can judge that certain topics may profit from being discussed in a certain context A even if I haven't personally discussed them in that context. The burden of proof is on you: if you want to make an argument, you have to provide more than a bare claim. So far you are just stating something I can't make any sense of.

"talking about near-future related topics and strategies". I don't know how else I can say this.

Again: why would someone be able to assess the viability of the context in which a certain topic is discussed only if they have engaged in the discussion of that topic? As I said above, this is a non sequitur, or at least you haven't provided any arguments to support the thesis. I can be in a position to suggest that scientists may profit from exchanging their ideas in a venue A even if I myself haven't exchanged any ideas in A.

Comment author: kbog, 10 September 2018 08:32:15PM, 0 points

> And I still don't see the rationale in what you are saying: I can judge that certain topics may profit from being discussed in a certain context A even if I haven't personally discussed them in that context

Yes, you can, technically, in theory. I'm recommending that you personally engage before judging it with confidence.

> The burden of proof is on you: if you want to make an argument, you have to provide more than a bare claim.

This kind of burden-of-proof shifting is not a good way to approach a conversation. I've already made my argument.

> So far you are just stating something I can't make any sense of.

What part of it doesn't make sense? I honestly don't see how it's not clear, so I don't know how to make it clearer.

> Again: why would someone be able to assess the viability of the context in which a certain topic is discussed only if they have engaged in the discussion of that topic?

They can; I'm just saying that it will be pretty unreliable.

Comment author: Dunja, 10 September 2018 08:38:48PM, 0 points

> I'm recommending that you personally engage before judging it with confidence.

But why would I? I might be fond of reading about certain causes from those who are more knowledgeable about them than I am. My donation strategies may profit from reading such discussions, and yet I may engage only where my expertise lies. This is why I really can't make sense of your recommendation (which was originally phrased as an imperative, in fact).

> This kind of burden-of-proof shifting is not a good way to approach a conversation. I've already made my argument.

I haven't seen any such argument :-/

> What part of it doesn't make sense? I honestly don't see how it's not clear, so I don't know how to make it clearer.

See above.