Comment author: Flodorner 10 September 2018 08:59:08AM 5 points [-]

Are any ways of making content easier to filter (like for example tags) planned?

I am rather new to the community, and there have been multiple occasions where I randomly stumbled upon old articles I hadn't read, concerned with topics I was interested in and had previously made an effort to find articles about. This seems rather inefficient.

Comment author: MichaelDickens  (EA Profile) 11 September 2018 02:52:34AM 4 points [-]

Another feature that could help people find old posts is to display a few random old posts on a sidebar. For example, on any of Jeff Kaufman's blog posts, five old posts display on the sidebar. I've found lots of interesting old posts on Jeff's blog via this feature.

Comment author: SiebeRozendal 23 July 2018 12:42:07PM 1 point [-]

I just replied to SamDeere's comment above about having multiple types of votes: one indicating agreement and one indicating "helpfulness". Then you can sort by both, but the forum is sorted by default by "helpfulness". Do you think this would fix some of your issues with a voting system?

Comment author: MichaelDickens  (EA Profile) 24 July 2018 01:24:00AM *  0 points [-]

I think there's another downside there: we should be wary of implementing a system that doesn't have a track record. There are lots of forums that don't have voting, and reddit-style voting has a long track record as well (plus Hacker News-style, which is similar but not quite the same as reddit-style). As you start introducing extra complexity, you don't know what's going to happen. Most possible designs are bad, and most designs we come up with a priori will probably be bad, so my inclination would be to stick close to a system that has a proven track record.

That said, having multiple types of upvotes could look something like Facebook which now has multiple types of likes, and we have at least some idea of what that would look like. So it might be a good idea.

Comment author: MichaelDickens  (EA Profile) 23 July 2018 04:57:43AM 7 points [-]

I'm concerned with the plans to make voting/karma more significant; I would prefer to make them less significant than the status quo rather than more. Voting allows everyone's biases to influence discussion in bad ways. For example, people's votes tend to favor:

  1. things they agree with over things they disagree with, which makes it harder to voice dissenting opinions
  2. entertaining content over important but less-entertaining content
  3. agreeable content without much substance over niche or disagreeable content with lots of substance
  4. posts that raise easy questions and give strong answers over posts that raise hard questions and give weak answers

Sorting the front page by votes, and giving high-karma users more voting power, only does more to incentivize bad habits. I think the current voting system is more suited to something like reddit which is meant for entertainment, so it's reasonable for the most popular posts to appear first. If the idea is to have "all of EA’s top researchers posting and commenting regularly", I don't think votes should be such a strong driver of the UX.

About a year ago I essentially stopped making top-level posts on the EA Forum because the voting system bothers me too much, and the proposed change sounds even worse. Maybe I'm an outlier, but I'd prefer a system that more closely resembled a traditional forum without voting where all posts have equal status. That's probably not optimal and it has its own problems (the most obvious being that low-quality content doesn't get filtered out), but I'd prefer it to the current or proposed system.

Comment author: John_Maxwell_IV 05 April 2018 11:25:45PM 1 point [-]

looking into the best ways of not holding the balances in cash

A possible approach to this problem is to have a mixture of liquid and illiquid assets. Suppose an EA fund has $500K, with $100K in very liquid assets, $200K in moderately liquid assets, and $200K in fairly illiquid assets. Suppose the fund manager decides they want to give all $500K in the fund to a specific organization. In that case, they could give $100K to the organization immediately, which would hopefully tide them over until the $200K in moderately liquid assets became available, which would hopefully tide them over until the remaining $200K became available.
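The staggered disbursement described above can be sketched numerically. This is a hypothetical illustration: the tranche sizes and settlement times are just the ones from the example, and `disbursement_schedule` is an invented helper, not anything EA Funds actually uses.

```python
# Hypothetical sketch of a staged disbursement from a fund holding
# assets of varying liquidity. Figures are illustrative only.

def disbursement_schedule(tranches):
    """Given (amount, days_until_liquid) pairs, return the cumulative
    amount the grantee has received by each settlement date."""
    schedule = []
    total = 0
    for amount, days in sorted(tranches, key=lambda t: t[1]):
        total += amount
        schedule.append((days, total))
    return schedule

# $100K liquid now, $200K liquid in ~30 days, $200K in ~180 days
tranches = [(100_000, 0), (200_000, 30), (200_000, 180)]
print(disbursement_schedule(tranches))
# -> [(0, 100000), (30, 300000), (180, 500000)]
```

The grantee's cash flow is then the cumulative column: full funding arrives in stages rather than all at once, which is the "tide them over" pattern in the comment.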

Comment author: MichaelDickens  (EA Profile) 19 April 2018 02:18:37AM 0 points [-]

Almost all typical assets--bonds, stocks, commodities--are highly liquid, in the sense that if you decide to sell them, you can convert them into cash in a few minutes at most. So even a well-diversified portfolio can still be liquid. The main exceptions are real estate and private equity, but I see no reason why EA Funds would need to hold those.

Comment author: RyanCarey 13 February 2018 09:44:10AM *  3 points [-]

I donated to MIRI and GCRI.

Also, the link to Zvi's writeup seems to be missing?

Comment author: MichaelDickens  (EA Profile) 15 February 2018 03:14:23AM 1 point [-]

I don't know, the link to Zvi's writeup works for me. But here is the URL:

Comment author: MichaelDickens  (EA Profile) 12 December 2017 05:21:19PM 12 points [-]

I haven't yet gotten around to writing up where I plan on donating in 2018 (I already maxed out my 2017 donations in February), but I've been thinking along the same lines. Recently I've been leaning toward donating to these smaller, riskier organizations because I see a lot of value in helping new orgs grow and learning what they can accomplish--especially because the established charities that I like best have gotten a lot of funding recently and have room to scale up before they start to hit the limits of their funding.

Comment author: MichaelPlant 02 June 2017 12:13:34PM 7 points [-]

Thanks for this Kerry, very much appreciate the update.

Three funds I'd like to see:

  1. The 'life-improving' or 'quality of life'-type fund that tries to find the best way to increase the happiness of people whilst they are alive. My view on morality leads me to think that is what matters most. This is the area I do my research on too, so I'd be very enthusiastic to help whoever the fund manager was.

  2. A systemic change fund. Part of this would be reputational (i.e. no one could then complain EAs don't take systemic change seriously); another part would be that I'd really like to see what the fund manager would choose to give money to if it had to go to systemic change. I feel that would be a valuable learning experience.

  3. A 'moonshots' fund that supported high-risk, potentially high-reward projects. For reasons similar to 2 I think this would be a really useful way for us to learn.

My general thought is the more funds the better, presuming you can find qualified enough people to run them. It has the positive effect of demonstrating EA's openness and diversity, which should mollify our critics. As mentioned, it provides chances to learn stuff. And it strikes me as unlikely new funds would divert much money away from the current options. Suppose we had an EA environmentalism fund. I assume people who would donate to that wouldn't have been donating to, say, the health fund already. They'd probably be supporting green charities instead.

Comment author: MichaelDickens  (EA Profile) 03 June 2017 06:46:47AM 4 points [-]

Now that you mention it, I think this would be a much more interesting way to divide up funds. I have basically no idea whether AI safety or anti-factory farming interventions are more important; but given the choice between a "safe, guaranteed to help" fund and a "moonshot" fund I would definitely donate to the latter over the former. Dividing up by cause area does not accurately separate donation targets along the lines on which I am most confident (not sure if that makes sense). I would much rather donate to a fund run by a person who shares my values and beliefs than a fund for a specific cause area, because I'm likely to change my mind about which cause area is best, and perhaps the fund manager will, too, and that's okay.

Some possible axes:

  1. life-improving vs. life-saving (or, similarly, total view vs. person-affecting view)
  2. safe bets vs. moonshots
  3. suffering-focused vs. "classical"
  4. short-term vs. far future

Although having all possible combinations along just these four axes would require 2^4 = 16 funds, so in practice this won't work exactly as I've described.
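The combination count follows from taking one option per binary axis. A quick sketch (the axis names here are just placeholder labels for the list above):

```python
from itertools import product

# Each axis from the comment, with its two options
axes = {
    "scope": ["life-improving", "life-saving"],
    "risk": ["safe bets", "moonshots"],
    "ethics": ["suffering-focused", "classical"],
    "horizon": ["short-term", "far future"],
}

# One fund per combination of one option from each axis
combos = list(product(*axes.values()))
print(len(combos))  # -> 16
```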

Comment author: Kerry_Vaughan 02 June 2017 05:00:58PM 3 points [-]

This is an interesting idea. I have a few hesitations about it, however:

  1. The number of organizations which are doing cause prioritization and not also doing EA Community Building is very small (I can't think of any off the top of my head).
  2. My sense is that Nick wants to fund both community building and cause prioritization, so splitting these might place artificial constraints on what he can fund.
  3. EA Community building has the least donations so far ($83,000). Splitting might make the resulting funds too small to be able to do much.

Comment author: MichaelDickens  (EA Profile) 03 June 2017 06:39:32AM 1 point [-]

RE #1, organizations doing cause prioritization and not EA community building: Copenhagen Consensus Center, Foundational Research Institute, Animal Charity Evaluators, arguably Global Priorities Project, Open Philanthropy Project (which would obviously not be a good place to donate, but still fits the criterion).

RE #2: if the point is to do what Nick wants, it should really be a "Nick Beckstead fund", not an EA Community fund.

Comment author: Peter_Hurford  (EA Profile) 30 May 2017 12:12:53AM 0 points [-]

Indeed, how can you even ever know which works better?

Retrospective analysis of track record? Looking into Tetlock-style research?

Comment author: MichaelDickens  (EA Profile) 30 May 2017 02:42:42PM 0 points [-]

Suppose it's 10 years in the future, and we can look back at what ACE and MIRI have been doing for the past 10 years. We now know some new useful information, such as:

  • Has ACE produced research that influenced our understanding of effective charities?
  • Has MIRI published new research that moved us closer to making AI safe?
  • Has ACE moved more money to top animal charities?

But even then, we still don't know nearly as much as we'd like. We don't know if ACE really moved money, or if that money would have been donated to animal charities anyway. Maybe MIRI took funding away from other research avenues that would have been more fruitful. We still have no idea how (dis)valuable the far future will be.

Comment author: MichaelDickens  (EA Profile) 29 May 2017 11:34:34PM 1 point [-]

I'm still undecided on the question of whether quantitative models can actually work better than qualitative analysis. (Indeed, how can you even ever know which works better?) But very few people actually use serious quantitative models to make decisions--even if quantitative models ultimately don't work as well as well-organized qualitative analysis, they're still underrepresented--so I'm happy to see more work in this area.

Some suggestions on ways to improve the model:

Account for missing components

Quantitative models are hard, and it's impossible to construct a model that accounts for everything you care about. Consider which parts of reality you expect to matter most for the impact of a particular thing, and try to model those; for whatever your model leaves out, figure out which omissions matter most. You might decide that some things are too hard to model, in which case you should consider how those hard-to-model bits will likely affect the outcome and adjust your decision accordingly.

Examples of major things left out:

  • The 80K model only considers impact in terms of new donations to GWWC, based on 80K's own numbers. It would be better to account for how many people 80K moves into or away from various cause areas, using your own effectiveness estimates for those causes.
  • The ACE model only looks at the value from moving money among top charities. My own model includes money moved among top charities, plus new money moved to top charities, plus the value of new research that ACE funds.

Sensitivity analysis

The particular ordering you found (80K > MIRI > ACE > StrongMinds) depends heavily on certain input parameters. For example, for your MIRI model, "expected value of the far future" is doing tons of work. It assumes that the far future contains about 10^17 person-years; I don't see any justification given. What if it's actually 10^11? Or 10^50? This hugely changes the outcome. You should do some sensitivity analysis to see which inputs matter the most. If any one input matters too much, break it down into less sensitive inputs.
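A minimal one-way sensitivity check might look like the sketch below. The model and every parameter value are invented placeholders, not numbers from the post; the point is only that the output scales linearly with the far-future person-years input, so uncertainty spanning 10^11 to 10^50 swamps everything else in the model.

```python
# Hypothetical toy model: EV of a donation as a product of factors.
# All parameters are made-up placeholders, not real estimates.

def miri_value(far_future_person_years,
               p_ai_risk=0.1,        # assumed chance the risk materializes
               p_miri_helps=1e-6,    # assumed marginal effect of the donation
               value_per_person_year=1.0):
    return (far_future_person_years * p_ai_risk
            * p_miri_helps * value_per_person_year)

# Vary only the most sensitive input across plausible orders of magnitude
for exponent in (11, 17, 50):
    print(f"10^{exponent} person-years -> EV {miri_value(10.0 ** exponent):.1e}")
```

Because the estimate is a simple product, a factor of 10^6 in this one input moves the result by the same 10^6, which is exactly the kind of dominance a sensitivity analysis is meant to expose.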
