Comment author: Greg_Colbourn 20 June 2018 10:42:55PM 3 points [-]

Remote supervision of research is a possibility, but depends on people with relevant knowledge and experience being available. Peer support from other guests will be available to some extent, especially given the preponderance of people in the movement with postgraduate degrees. However, plenty of research can be self-directed, especially things that are more a case of collating existing knowledge than developing new science (e.g. meta-analyses, review articles, writing books). And the hotel will probably appeal to autodidacts who can plow through published texts and then build on top of them (without much need for explanation additional to what they find in writing).

The hotel is open to hosting research groups, and also conferences and workshops.

Comment author: Dunja 21 June 2018 07:45:59AM *  0 points [-]

That's all fine, but how does one make sure their meta-analysis follows the methodological rigor adequate to the given domain unless they have prior research experience or an in-depth knowledge of such methods? Writing review articles may be easier, though writing books that will make an impact is again hard without prior research experience. I've noticed in the area of EA that for some reason there is a misconception concerning research in the humanities: everyone would agree that conducting natural science outside of an appropriate expert team or a research institution is almost impossible. Yet people tend to assume that the humanities are different. They are not. When it comes to using reliable methods, engaging with the relevant literature, and producing studies (even meta-analyses) that actually matter - all this is far from trivial and requires expertise. It's extremely hard to weave through tons of information and uncover what actually matters, what should be reviewed, and then push that towards a finding that will actually make an impact. So when you say that autodidacts can plow through published texts and build on top of them - that's not at all simple without prior experience with such research and without knowing quite well:

  • how to sort through the given texts and order them according to relevance
  • how to assess the given texts (depending on the domain, one might need to acquire additional skills for this)
  • what exactly to write about to "build on top of that": which standards should be employed in the given field so that they lead to an actual publication, etc.

So I'd say: regular supervision (even if remote) for any pre-doc is extremely important, as is having funds to attend the relevant events in the field where one can get feedback on one's work.

Comment author: Greg_Colbourn 18 June 2018 11:51:27PM 0 points [-]

There is an analogy with speculative investing here, I think: for something to be widely regarded as worthwhile to invest in (i.e. research funded by mainstream academia), it has to already have evidence of success (e.g. Bitcoin now). By that point it is no longer new and highly promising in terms of expected value (like Bitcoin was in, say, 2011) - i.e. it is necessarily the case that all things very high in (relative) expected value are outside the mainstream.

AGI alignment is gaining more credibility, but it still doesn't seem like it's that accepted in mainstream academia.

Anyway, I think we are probably on a bit of a tangent to what AISC is trying to achieve - namely help new researchers level up (/get a foot in the door in academic research).

Comment author: Dunja 19 June 2018 12:09:17AM *  0 points [-]

Oh, I agree that for many ideas to be attractive, they have to gain a promising character. I wouldn't reduce the measure of the pursuit-worthiness of scientific hypotheses to evidence of their success, though: this measure is rather a matter of prospective values, which have to do with a feasible methodology (how many research paths do we have despite current problems and anomalies?). But indeed, sometimes research may proceed simply as tapping in the dark, in spite of all the good methodological proposals (as, e.g., might have been the case in the research on protein synthesis in the mid-20th century).

However, my point was simply this question: does such an investment in future proposals outweigh the investment in other topics, so that it should be funded from an EA budget rather than from existing public funds? Again: I very much encourage such camps - just not at the expense of the cash meant for effectively reducing suffering (given that these projects are highly risky and already heavily funded by, say, OpenPhil).

Comment author: Greg_Colbourn 18 June 2018 11:09:49PM 0 points [-]

For more on the thinking behind streamlined non-mainstream funding, see

I don't think academia is yet on the same page as EA with regard to AI Safety, but hopefully it may well be soon (with credibility coming from the likes of Stuart Russell and Max Tegmark).

Comment author: Dunja 18 June 2018 11:14:40PM *  0 points [-]

But this is not about whether academia is on the same page or not; it's about the importance of pushing the results via academic channels, because otherwise they won't be recognized by anyone (policy makers especially). Moreover, what I mention above are funding institutions that finance individual projects, assessed in terms of their significance and feasibility. If there is a decent methodology for addressing the given objectives, then even if the issue is controversial, that doesn't mean the project won't be financed. Alternatively, if you actually know of decent project applications that have been rejected, let's see those and examine whether there is indeed a bias in the field. Finally, why do you think that academia is averse to risky projects?! Take for instance the ERC schemes: they are intentionally designed for high-risk/high-gain project proposals that are transformative and groundbreaking in character.

Comment author: Dunja 18 June 2018 10:51:36PM 2 points [-]

This is great, thanks a lot for doing this work!

Comment author: Dunja 18 June 2018 10:21:37PM *  2 points [-]

Congrats on such a creative idea and the commitment to realizing it! :) My main worry concerns a very basic premise that seems to underlie the project: that providing an optimal space for individuals to do research is likely to result in efficient and effective research. While conducting online courses may indeed be useful, conducting unguided research is not only hard but also unlikely to lead anywhere in terms of effectiveness and efficiency. A junior researcher without access to a supervisor who has in-depth knowledge of the given subject domain is likely to end up tapping in the dark, trying out all kinds of paths that are far from optimal. This is why the role of a supervisor is so important: one learns which topics to focus on, which gaps in the knowledge should first be filled, how this should be done, which blind spots are hindering one's research, etc. And that only concerns knowledge acquisition.

Knowledge production is probably even harder: without access to guidance on how to conduct e.g. empirical research, how to write academic papers, which workshops and conferences are optimal places for receiving critical feedback, which journal would be a good fit for a given paper, etc., one's output is likely to remain unknown, unrecognized by the relevant community (academic or EA-related), and hence entirely ineffective.

I am not sure which steps could be taken to tackle these problems. The only solution I currently see is opening the hotel to larger project applications by experts willing to coordinate the research done in the center, and who would regularly visit the place to guide junior researchers.

Comment author: remmelt  (EA Profile) 17 June 2018 12:10:44PM *  1 point [-]

If it would cost the same or less time to get funding via public grants and institutions, I would definitely agree (i.e. counting the time spent filling in application forms, the average number of applications that need to be submitted before the budget is covered, and the loss of time to distractions and 'meddling' by unaligned funders).

Personally, I don't think this applies to AI Safety Camp at all though (i.e. my guess is that it would cost significantly more time than getting money from 'EA donors', which we would be better off spending on improving the camps) except perhaps in isolated cases that I have not found out about yet.

I'm also not going to spend the time to write up my thoughts in detail but here's a summary:

  • AI alignment is complicated - there's a big inferential gap in explaining to public grantmakers why this is worth funding (as well as difficulty in making the case for how funding it will make them look good)
  • The AI Safety Camp is not a project of an academic institution, which gives us little credibility with other academic institutions - the ones most capable of understanding the research we are building on
  • Tens of millions of dollars are being earmarked for AI alignment research right now by people in the EA community who are looking to spend them on promising projects run by reliable people. There seems to be a consensus that we need to work on finding talent to spend the money on (not on finding more outside funders).

Comment author: Dunja 17 June 2018 05:41:14PM *  1 point [-]

I very much understand your hope concerning AI talent and the promising value of this camp. However, I'd also like to see an objective assessment of effectiveness (as in effective altruism) concerning such research attempts. To do so, you would have to show that such research has a comparatively higher chance of producing something outstanding than existing academic research does. Of course, that needs to be done in view of empirical evidence, which I very much hope you can provide. Otherwise, I don't know what sense of "effective" is still present in the meaning of "effective altruism".

Again: I think these kinds of research camps are great as such, i.e. in view of overall epistemic values. They are as valuable as, say, a logic camp, or a camp on agent-based models. However, I would never argue that a camp on agent-based models should be financed by EA funds unless I had empirically grounded reasons to think that such research can contribute to effective charity and the prevention of possible dangers better than existing academic research can.

As for the talent search, you seem to assume that academic institutions cannot uncover such talents. I don't know where you get this evidence from, but PhD grants across the EU, for instance, are geared precisely towards such talents. Why would talented individuals not apply for those? And where do you get the idea that the topic of AI safety won't be funded by, say, the Belgian FWO or the German DFG? Again, you would need to provide empirical reasons to think that such a systematic bias against projects on these topics exists.

Finally, if the EA community wants to fund reliable project initiators on the topic of AI safety, why not make an open call for experts in the field to apply with project proposals and form teams who can immediately execute these projects within existing academic institutions? Where is this fear of academia coming from? Why would a camp like this be more streamlined than an expert proposal, where the PI of the given project employs the junior researchers and systematically guides them in the given research? In all other areas of EA this is precisely how we wish to proceed (think of medical research).

Comment author: remmelt  (EA Profile) 15 June 2018 04:58:37PM *  0 points [-]

I’ll answer this point since I happen to know.

  • Left-over funds from the previous camp were passed on
  • Greg Colbourn is willing to make a donation
  • The funding team just submitted an application for EA Grant’s second round

The team does have plenty of back-up options for funding, so I personally don't expect financial difficulties (though I think it would be less than ideal if the Czech Association for Effective Altruism had to cover a budget deficit itself).

Comment author: Dunja 15 June 2018 06:39:13PM *  0 points [-]

Thanks for the reply! It's great if the funding comes from institutions or individuals who are willing to support research topics. I think it would be really bad, though, if the funding were taken from standard EA donations without first attempting to get it via existing public grants and institutions (and even then it would still be bad, given the comparative impact of such a camp versus alternative ways of providing effective charity). I am all for research, but primarily via non-EA research funds, which are numerous for topics such as this one - i.e. we should strive to fund EA research topics from general research funds as much as possible.

Comment author: Dunja 14 June 2018 01:38:03PM *  1 point [-]

Thanks for this info, Anne! Could you just clarify who the sponsors of the camp are? I'm asking because attendance is free, but I haven't found any info on who is paying for the whole event. Just out of curiosity :)

Comment author: Denkenberger 05 May 2018 12:29:06PM *  0 points [-]

First, let me clarify that I only support people voluntarily taking sweatshop jobs - I do not support anything involuntary. I think it is good to consider the long-term implications of present actions. But by taking a sweatshop job, not only can people afford life-saving interventions, they can also afford things like elementary education, which has massive long-term benefits. Saying something is better does not imply that it is good. Starting with sweatshop jobs, the four Asian Tigers have made a dramatic rise: according to purchasing power parity per capita, South Korea and Taiwan are richer than Spain, and Singapore and Hong Kong are richer than the US!

Comment author: Dunja 05 May 2018 05:27:44PM 0 points [-]

Sure, but can we really speak of "choice" for those who have no other options? Again: your argument could be used to defend any form of slavery, as long as slaves became slaves "by choice". If otherwise they wouldn't have survived, what kind of choice is that?! Imagine the alternative: consumer-driven pressure on companies to introduce serious control of working conditions. As a result, current sweatshops eventually become much safer places to work. It's a long-term win-win scenario.

Comment author: Denkenberger 02 May 2018 10:21:36PM 0 points [-]

Hmm... I think that providing sweatshop jobs has positive economic and social long-term consequences, because it brings people out of extreme poverty. I think the main drawback is the non-utilitarian criticism of sweatshops as "exploiting" people. Most people do not recognize that sweatshops are orders of magnitude safer than living in extreme poverty, where something like 20% of your children die. But even if people were aware of that, they could still say that since sweatshops do not have the same safety standards as developed-country factories, it is somehow unfair to those workers - they are not getting justice.

Comment author: Dunja 03 May 2018 08:46:59PM *  0 points [-]

The problem with this kind of reasoning is that one could attempt to justify slavery in the same way: just because sweatshops/slavery provide living conditions better than none, it doesn't mean we shouldn't strive to abolish them. Hence, by boycotting sweatshops one sends an important message to the corporations that use them. As a result, under sufficient pressure, companies will change their rules and take care to provide better working conditions. The goal here is long-term structural improvement of social/economic practices rather than short-term help.
