Comment author: Ben_Todd 08 August 2018 06:04:37AM 4 points

I agree that would be ideal, but it doesn't seem like a high priority feature. The risk-free 1yr interest rate is about 2% at the minute (in treasuries), so even if the money is delayed for a whole year, we're only talking about a gain of 2%, and probably more like 1% after transaction costs.
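The back-of-the-envelope arithmetic above can be made explicit. This is only an illustrative sketch: the 2% risk-free rate and ~1% transaction-cost figure come from the comment, while the $100,000 fund size is a hypothetical number chosen for the example.

```python
def delayed_donation_gain(principal, rate, cost_rate, years=1):
    """Extra money available if a donation sits invested for `years`
    before being granted, net of one-off transaction costs."""
    gross = principal * ((1 + rate) ** years - 1)  # interest earned
    costs = principal * cost_rate                  # buying/selling costs
    return gross - costs

# $100k delayed one year at a 2% risk-free rate, with ~1% costs:
gain = delayed_donation_gain(100_000, rate=0.02, cost_rate=0.01)
print(gain)  # roughly 1000, i.e. about a 1% net gain on the fund
```

On a hypothetical $100k fund the upside is on the order of $1,000, which is the "probably more like 1%" figure in the comment.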

You could invest in the stock market instead, but the expected return is still probably only 1-5% per year (as I argue here: https://80000hours.org/2015/10/common-investing-mistakes-in-the-effective-altruism-community/). Plus, then you have a major risk of losing lots of the money, which will probably be pretty hard to explain to many of the users, the press etc.

I expect the staff time spent adding and managing this feature could yield much more than a couple of percent growth to the impact of the funds in many other ways (e.g. the features Marek lists above).

Comment author: RyanCarey 08 August 2018 06:30:30AM 3 points

Agreed. You could get a higher effective ROI by mission-hedging -- investing AI-risk funds in things like Google. But even then, the returns seem like a pretty second-order issue.
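A toy calculation can show why mission hedging beats an uncorrelated portfolio even at the same expected return. All numbers here are invented for illustration (scenario probabilities, returns, and the "marginal value of a donated dollar" weights are assumptions, not from the comment).

```python
# Two hypothetical worlds: AI progress is fast (donations are worth more
# at the margin) or slow. Tuples are (probability, AI-stock return,
# marginal value per donated dollar). All figures are illustrative.
scenarios = {
    "ai_progress_fast": (0.5, 0.30, 2.0),
    "ai_progress_slow": (0.5, 0.00, 1.0),
}

def expected_impact(returns_by_world):
    """Probability-weighted impact of donating the grown portfolio."""
    return sum(p * (1 + returns_by_world[w]) * value
               for w, (p, _r, value) in scenarios.items())

# Hedged: hold AI stocks, so returns are high exactly when money matters most.
hedged = expected_impact({w: r for w, (_p, r, _v) in scenarios.items()})

# Unhedged: a flat 15% return in both worlds -- the SAME expected return.
unhedged = expected_impact({w: 0.15 for w in scenarios})

print(hedged, unhedged)  # hedged comes out ahead despite equal expected return
```

The hedged portfolio wins because its returns correlate with the worlds where a marginal dollar does more good, which is the whole idea of investing AI-risk funds in companies like Google.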

Comment author: RyanCarey 20 July 2018 10:48:45AM 0 points

Is this likely to occur again in 2018?

Comment author: RyanCarey 19 July 2018 11:40:40PM *  10 points

This seems like a strong plan, and I'm glad you've thought things through thoroughly. I'll just outline points of agreement, and slight differences.

I certainly agree with the approach of building EA Forum 2 from LessWrong 2. After all, the current EA Forum version was built from LessWrong 1 for similar reasons. We had a designer sketch the restyled site, and this was quite a positive experience, so I'd recommend doing the same with the successor. Basically, the EA Forum turned out quite a bit more beautiful than LessWrong, and the same should be possible again. I think there are some easy wins to be had here, like making LW2's front-page text a bit darker, but I also think it's possible to go beyond that and make things really pretty all-around.

I agree with keeping LW2's new Karma system, and method of ordering posts, and I think that this is a major perk of the codebase. I'm also happy to see that you seem to have the downsides well-covered.

One small difference is where you say "Although CEA has a view on which causes we should prioritize, we recognize that the EA Forum is a community space that should reflect the community." Personally, I think that forum administrators should be able to shape the content of the forum a little bit. Not by carrying out biased moderation, but by various measures that are considered "fair", like producing content, promoting content, voting, and so on.

I think the possible features are kind-of interesting. My thoughts are as follows:

  • different landing pages: may be good
  • local group pages: may be good, but maybe events are best left on Facebook. Would be amazing if you can automatically include Facebook events, but I've no idea whether that's feasible.
  • additional subforums: probably bad, because I think the community is currently only large enough to support ~2 active fora, and having multiple fora adds confusion and reduces ease-of-use.
  • Single sign-on: likely to be good, since things are being consolidated to one domain.

Thanks again to Trike Apps for running the forum over all these years, and thanks to CEA for taking over. With my limited time, it would never have been possible to transition the forum over to new software, and so we would have been in a much worse position. So thanks all!

In response to Open Thread #40
Comment author: RandomEA 15 July 2018 10:09:52AM 5 points

Frequency of Open Threads

What do people think would be the optimal frequency for open threads? Monthly? Quarterly? Semi-annually?

In response to comment by RandomEA on Open Thread #40
Comment author: RyanCarey 16 July 2018 06:53:29AM *  1 point

Every 2-3 months seems good.

Comment author: Brendon_Wong 10 July 2018 04:50:47PM *  1 point

Hi Ryan, thanks for sharing information and feedback! I completely agree: practically speaking, spending a long time building something without market feedback/validation is not a good idea, so using an existing way to process applications and operating under an established organization would be a great way to get started effectively.

I am curious if you have any feedback on the fused proposal that I had in mind, and how to potentially improve the design in order to protect against the possibility of funding low-quality or harmful projects. I was imagining that since there is a discussion section for each proposal, anyone could mention potential problems that could arise from funding a proposal, and donors could check this section for feedback before contributing. Perhaps the benefits from this openness do not exceed the potential harm but it's difficult for me to assess this.

Comment author: RyanCarey 11 July 2018 11:07:25AM 0 points

The concept starts with a website that has a fully digital grant application process. Applicants create user accounts that let them edit applications, and applicants can choose from a variety of options like having the grant be hidden or publicly displayed on the website, and posting under their real names or a pseudonym. Grants have discussion sections for the public to give feedback. Anonymous project submissions help people get feedback without reputation risk and judge project funding potential before committing significant time and resources to a project. If the applicant opts to make an application public, it is displayed for everyone to see and comment on. Anyone can contact the project creator, have a public or private discussion on the grant website, and even fund a project directly.

What does this achieve that Google Docs linked from the EA Forum can't achieve? I think it should start with a more modest MVP that works within existing institutions and more extensively leverages existing software products.

The website is backed by a centralized organization that decides which proposals to fund via distributed grantmaking. Several part-time or full-time team members run the organization and assess the quality and performance of grantmakers. EAs in different cause areas can apply to be grantmakers. After an initial evaluation process, beginner grantmakers are given a role like “grant advisor” and given a small grantmaking budget. As grantmakers prove themselves effective, they are given higher roles and a larger grantmaking budget.

This sounds good.
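The tiered grantmaker model quoted above could be sketched as a simple data structure. The tier names and budget caps here are illustrative assumptions, not figures from the proposal.

```python
from dataclasses import dataclass

# Hypothetical tiers and budgets; the proposal only names "grant advisor".
TIER_BUDGETS = {
    "grant_advisor": 10_000,      # beginner grantmakers get a small budget
    "grantmaker": 50_000,
    "senior_grantmaker": 200_000,
}
TIERS = list(TIER_BUDGETS)  # insertion order = promotion order (Python 3.7+)

@dataclass
class Grantmaker:
    name: str
    tier: str = "grant_advisor"  # everyone starts at the bottom tier

    @property
    def budget(self) -> int:
        return TIER_BUDGETS[self.tier]

    def promote(self) -> None:
        """Move up one tier once the org judges performance adequate."""
        i = TIERS.index(self.tier)
        if i + 1 < len(TIERS):
            self.tier = TIERS[i + 1]

gm = Grantmaker("example_ea")
gm.promote()
print(gm.tier, gm.budget)  # grantmaker 50000
```

The interesting design questions (who evaluates performance, and how budgets scale with track record) sit outside this sketch, but the escalating-budget structure itself is straightforward.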

While powered by decentralized grantmakers, the organization has centralized funding options for donors that do not want to evaluate grants themselves.

I'm not sure what you mean by "centralized funding options".

Donations can be tax-deductible, non-tax-deductible, or even structured as impact investments into EA initiatives. Donors can choose cause areas to fund, and can perhaps even fund individual grantmakers.

This sounds good.

Comment author: RyanCarey 10 July 2018 11:29:56AM 5 points

Nice post, Brendon!

I've been of the view for the last couple of years that it'd be useful to have more dedicated effort put toward funding EA projects.

I have a few factual contributions that should help flesh out your strategic picture here:

  1. BERI, in addition to EA Grants, is funding some small-scale projects. In the first instance, one might want to bootstrap a project like this through BERI, given that they already have some funding available and are a major innovator in the EA support space right now.
  2. OpenPhil does already do some regranting.
  3. EA Ventures attempted, over the course of some months, to do this a few years ago, which you can read at least a bit about here: http://effective-altruism.com/ea/fo/announcing_effective_altruism_ventures/. I think it failed for a range of reasons including inadequate projects, but it would be worth looking into this further.

Notwithstanding these factors, I still think this idea is worth exploring. As you suggest, I might start off by creating a grant application system. But I think the most important aspects are probably not the system itself so much as the quality of evaluators and the volume of funders. So it might be best to try to bootstrap it from an existing organization or funder, and to initially accept applications via a low-tech system, such as Google Doc proposals. I'd also emphasise that one good aspect of the status quo is that bad ideas mostly go unfunded at present, especially ones whose low quality could damage the reputation of EA and its associated research fields, or ones that could inspire harmful activity. There are more potentially harmful projects within the EA world than in entrepreneurship in general, and so these risks might be overlooked by people taking an entrepreneurial or open-source stance; this is worth guarding against.

One meta-remark is that I generally like the conversations that are prompted by shared Google Docs, and I think that this generates, on average, more extensive and fine-grained feedback than a Forum Post would typically receive. So if you put out a "nonprofit business plan" for this idea, then I figure a Google Doc (+/- links from the Forum and relevant Facebook groups) would be a great format. Moreover, I'd be happy to provide further feedback on this idea in the future.

Comment author: Peter_Hurford 19 June 2018 12:41:12AM 2 points

We control the site, so we can revert the addition of any information hazards if they come up. I imagine the site has the same risk of spreading infohazards as, say, this forum.

Comment author: RyanCarey 26 June 2018 08:11:43AM *  1 point

Do you have a plan for scanning over posted materials (analogously to moderation on the EA Forum), a code of conduct for posts, or a procedure for discreetly flagging hazardous content?

Comment author: RyanCarey 19 June 2018 12:20:36AM 5 points

Do you have a plan for managing information hazards?

Comment author: Risto_Uuk 16 March 2018 11:03:43PM 2 points

Sam Harris did ask Steven Pinker about AI safety. If anybody gets around to listening to it, the discussion starts at 1:34:30 and ends at 2:04, so that's about 30 minutes on risks from AI. Harris wasn't at his best in that discussion, and Pinker came off as much more nuanced and evidence- and reason-based.

Comment author: RyanCarey 29 May 2018 05:32:25PM 1 point

I agree with the characterization of the discussion, but regardless, you can find it here: https://www.youtube.com/watch?v=H_5N0N-61Tg&t=86m12s

Comment author: RyanCarey 02 May 2018 11:32:16PM 1 point

I think this has turned out really well, Max. I like that this project looks set to aid with movement growth while improving the movement's intellectual quality, because the content is high-quality and representative of current EA priorities. Maybe the latter is the larger benefit, and it will probably help everyone to feel more confident in accelerating movement growth over time, so I hope we can find more ways to have a similar effect!
