Comment author: Eli_Nathan 07 August 2018 07:09:53PM 4 points [-]

Thanks Marek,

I remember some suggestions a while back to store the EA Funds cash (not crypto) in an investment vehicle rather than in a low-interest bank account. One benefit of this would be donors feeling comfortable donating whenever they wish, rather than waiting for the last possible moment before funds are allocated (especially if the fund manager does not have a particular schedule). Just wondering whether there's been any thinking on this front?

Comment author: SamDeere 07 August 2018 07:39:10PM 3 points [-]

Hey Eli – there has definitely been thinking on this, and we've done a shallow investigation of some options. At the moment we're trying to avoid making large structural changes to the way EA Funds is set up that have the potential to increase accounting complexity (and possibly audit compliance complexity too), but this is in the pipeline as something we'd eventually like to make happen, especially as the total holdings get larger.

Comment author: Peter_Hurford  (EA Profile) 24 July 2018 11:03:43PM 6 points [-]

Hey Nick,

I'm excited to hear you've made a bunch of grants. Do you know when they'll be publicly announced?

Comment author: SamDeere 30 July 2018 09:42:36PM *  13 points [-]

The grant payout reports are now up on the EA Funds site:

Note that the Grant Rationale text is basically the same for both, as Nick has summarised his thinking in one document, but the payout totals reflect the amount disbursed from each fund.

Comment author: alexherwix 20 July 2018 10:22:37AM *  4 points [-]

Thank you Marek and the whole CEA team for taking on this project! I love your initiative and what you outline seems like a very valuable and necessary step for the EA community. If things work out as you imagine, EA could be one of the first science-driven communities with a strong "community-reviewed" journal type offering (in this vein it may make sense to introduce different types of "publications" – idea, project report, scientific publication, etc. – with different standards for review and moderation). Very inspiring!

A question that comes to my mind is about your plans and stance on making user profiles/data accessible to external partners and integrations. For example, I am investing some time into thinking about the funding pipeline in EA right now, in particular with a focus on small-scale projects, which seem to be falling through the cracks at the moment. Having a funding platform integrate with the community system and trust measures of the EA Forum could be a game changer for this (for people interested in this topic, get in touch on the Rethink Slack #ti-funding or https://gitlab.com/effective-altruism/funding-pipeline – there's not much written down yet, but there are already some people interested in this space). Given that the LessWrong 2.0 codebase is open source, it should be possible to develop secure means of integration between different platforms if the provider of the forum enables it. Have you considered these kinds of long-term use cases in your planning so far? Do you have a vision for how collaboration with "non-CEA" affiliated projects could look in the future?

Comment author: SamDeere 21 July 2018 04:07:00PM 2 points [-]

Two thoughts, one on the object-level, one on the meta.

On the object level, I'm skeptical that we need yet another platform for funding coordination. This is more of a first-blush intuition, and I don't propose we have a long discussion on it here, but I just wanted to add my $0.02 as a weak datapoint. (Disclosure — I'm part of the team that built EA Funds and work at CEA, which runs EA Grants, so make of that what you will. Also, to the extent that small projects are falling through the gaps because of evaluation-capacity constraints, CEA is currently in the process of hiring a Grants evaluator.)

On the meta level (i.e. how open should we be to adding arbitrary integrations that can access a user's forum account data), I think there's definitely some merit to this, and I can envisage cool things that could be built on top of it. However, my first-blush take is that providing an OAuth layer, exposing user data etc. is unlikely to be a very high priority (at least from the CEA side) when weighed against other possible feature improvements and other CEA priorities, especially given the likely time cost of maintaining the auth system where it interfaces with other services, and the magnitude of the impact I'd expect an EA Forum data integration to have. That said, as you note, the LW codebase is open source, so I'd suggest submitting an issue there, discussing it with the core devs to make the case, and possibly submitting a PR if it's something that would be sufficiently useful to a project you're working on.

Comment author: Marcus_A_Davis 20 July 2018 01:46:54AM 6 points [-]

I think the proposed karma system, particularly when combined with highly rated posts being listed higher, is quite a bad idea. In general, if you are trying to ensure the quality of posts and comments while spreading the forum out more broadly, there are hard tradeoffs with different strengths and weaknesses. Indeed, I might prefer some type of karma weighting system to overly strict moderation, but even then the weights proposed here don't seem justifiable.

What problem is being solved by giving up to 16 times the maximum vote weight that would not be solved by giving users with high karma "merely" a maximum of 2 times the weight? Or 4 times?

However, we obviously don’t want this to become a tyranny of a few users. There are several users, holding very different viewpoints, who currently have high karma on the Forum, and we hope that this will help maintain a varied discussion, while still ensuring that the Forum has strong discussion standards.

While it may be true now that there are multiple users with high karma holding very different viewpoints, any imbalance among competing viewpoints at the start of a weighted system could feed back on itself. That is to say, if viewpoint X has 50% of the top posters (by weight in the new system), Y has 30%, and Z 20%, viewpoint Z could easily see its share shrink relative to the others, because the differential voting will compound over time.
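
As a rough sketch of that worry (this is my own toy model with made-up parameters, not anything from the actual proposal), the compounding can be seen in a few lines:

```python
# Toy model: viewpoints X, Y, Z start with 50/30/20% of top-poster voting weight,
# and each 'round' a viewpoint's share of newly-earned weight grows slightly
# faster when its existing weight is larger (weight begets weight).
shares = {"X": 0.50, "Y": 0.30, "Z": 0.20}
AMPLIFICATION = 1.1  # hypothetical strength of the feedback effect

for _ in range(10):  # ten 'rounds' of karma accumulation
    raw = {v: s ** AMPLIFICATION for v, s in shares.items()}
    total = sum(raw.values())
    shares = {v: r / total for v, r in raw.items()}

print({v: round(s, 2) for v, s in shares.items()})
# With these made-up numbers Z drifts from 20% to roughly 7%, illustrating
# (not proving) how a small initial imbalance could compound over time.
```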

Comment author: SamDeere 21 July 2018 01:32:16AM 5 points [-]

Thanks for the comments on this Marcus (+ Kyle and others elsewhere).

I certainly appreciate the concern, but I think it's worth noting that any feedback effects are likely to be minor.

As Larks notes elsewhere, the scoring is quasi-logarithmic — to gain one extra point of voting power (i.e. to have your vote be able to count against that of a single extra brand-new user) is exponentially harder each time.

Assuming that it's twice as hard to get from one 'level' to the next (meaning that each 'level' has half as many users as the preceding one), the average 'voting power' across the whole of the forum is only 2 votes. Even if you assume that people at the top of the distribution are proportionally more active on the forum (i.e. a person with 500,000 karma is 16 times as active as a new user), the average voting power is still only ≈3 votes.

Given a random distribution of viewpoints, this means that it would take the forum's current highest-karma users (≈5,000 karma) 30-50 times as much engagement in the forum to get from their current position to the maximum level. Given that those current karma levels have been accrued over a period of several years, this would entail an extreme step-change in the way people use the forum.

(Obviously this toy model makes some simplifying assumptions, but these shouldn't change the underlying point, which is that logarithmic growth is slooooooow, and that the difference between a logarithmically-weighted system and the counterfactual 1-point system is minor.)
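
For anyone who wants to check those two averages, here's a minimal sketch of the same toy model (the only assumptions are the 1–16 vote range and the halving of users per level, as stated above):

```python
# Voting power runs from 1 to 16; each 'level' has half as many users as the one below it.
levels = range(1, 17)
users = [2 ** -(k - 1) for k in levels]  # relative number of users at each level

# Plain average voting power across all users.
avg_power = sum(k * u for k, u in zip(levels, users)) / sum(users)

# If activity scales with voting power (a 16-vote user is 16x as active as a new user),
# weight each user's contribution to the vote pool by their power.
activity_weighted = (sum(k * k * u for k, u in zip(levels, users))
                     / sum(k * u for k, u in zip(levels, users)))

print(round(avg_power, 2), round(activity_weighted, 2))  # ≈ 2.0 and ≈ 3.0
```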

This means that the extra voting power is a fairly light thumb on the scale. It means that community members who have earned a reputation for consistently providing thoughtful, interesting content *can* have a slightly greater chance of influencing the ordering of top posts. But the effect is going to be swamped if only a few newer users disagree with that perspective.

The emphasis on *can* in the preceding sentence is because people shouldn't be using strong upvotes as their default voting mechanism — the normal-upvote variance will be even lower. However, if we thought this system was truly open to abuse, a very simple way we could mitigate this is to limit the number of strong upvotes you can make in a given period of time.

There's an intersection here with the community norms we uphold. The EA Forum isn't supposed to be a place where you unreflectively pursue your viewpoint, or about 'winning' a debate; it's a place to learn, coordinate, exchange ideas, and change your mind about things. To that end, we should be clear that upvotes aren't meant to signal simple agreement with a viewpoint. I'd expect people to upvote things they disagree with but which are thoughtful and interesting etc. I don't think for a second that there won't be some bias towards just upvoting people who agree with you, but I'm hoping that as a community we can ensure that other things will be more influential, like thoughtfulness, usefulness, reasonableness etc.

Finally, I'd also say that the karma system is just one part of the way that posts are made visible. If a particular minority view is underrepresented, but someone writes a thoughtful post in favour of that view, then the moderation team can always promote it to the front page. Whether this seems good to you obviously depends on your faith in the moderation team, but again, given that our community is built on notions like viewpoint diversity and epistemic humility, then the mods should be upholding these norms too.

Comment author: Habryka 20 July 2018 04:51:19PM 4 points [-]

Huh, I am unaware of this. Feel free to ping us on Intercom about any old posts you want deleted. The old database was somewhat inconsistent about the ways it marked posts as deleted, so there is a chance we missed some.

Comment author: SamDeere 20 July 2018 11:48:54PM 0 points [-]

Yeah MoneyForHealth, it does seem like it would be useful if you could point out instances of this happening on LW. Then we'll have a better shot at figuring out how it happened, and at avoiding it happening with the EA Forum migration.

Comment author: Jan_Kulveit 19 July 2018 03:17:07PM *  14 points [-]

Feature request: integrate the content from the EA fora into LessWrong in a similar way as alignmentforum.org

Risks & dangers: I think there is a non-negligible chance the LW karma system is damaging the discussion and the community on LW in some subtle but important way.

Implementing the same system here makes the risks correlated.

I do not believe anyone among the development team or moderators really understands how such things influence people on the S1 level – it seems somewhat similar to likes on Facebook, and it's clear that likes on Facebook are able to mess with people's motivation in important ways. So the general impression is that people are playing with something possibly powerful, likely without deep understanding, and possibly with a bad model of what the largest impacts are (a focus on the ordering of content, vs. subtle impacts on motivation).

In situations with such uncertainty, I would prefer the risks to be less correlated.

Edit: another feature request: allow adding co-authors to posts. A lot of texts are created by multiple people, and it would be nice if all the normal functionality worked.

Comment author: SamDeere 20 July 2018 11:29:27PM 2 points [-]

Implementing the same system here makes the risks correlated.

The point re correlation of risks is an interesting one — I've been modelling the tight coupling of the codebases as a way of reducing overall project risk (from a technical/maintenance perspective), but of course this does mean that we correlate any risks that are a function of the way the codebase itself works.

I'm not sure we'll do much about that in the immediate term because our first priority should be to keep changes to the parent codebase as minimal as possible while we're migrating everything from the existing server. However, adapting the forum to the specific needs of the EA community is something we're definitely thinking about, and your comment highlights that there are good reasons to think that such feature differences have the important additional property of de-correlating the risks.

Feature request: integrate the content from the EA fora into LessWrong in a similar way as alignmentforum.org

That's unfortunately not going to be possible in the same way. My understanding is that the Alignment Forum beta is essentially running on the same instance (server stack + database) as the LessWrong site, and some posts are just tagged as 'Alignment Forum' which makes them show up there. This means it's easier to do things like have parallel karma scores, shared comments etc.
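
As a crude illustration of that 'one database, tagged posts' pattern (this is not the actual LW2 code, which is JavaScript — just a sketch of the idea):

```python
# One shared store of posts: the Alignment Forum 'site' is just a filtered view of
# posts carrying an AF flag, which is why karma and comments can be shared.
posts = [
    {"title": "Post A", "karma": 40, "alignment_forum": True},   # shows up on both sites
    {"title": "Post B", "karma": 12, "alignment_forum": False},  # LessWrong only
]

alignment_forum_view = [p for p in posts if p["alignment_forum"]]
lesswrong_view = posts  # everything
print([p["title"] for p in alignment_forum_view])  # ['Post A']
```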

We see the EA Forum as a distinct entity from LW, and while we're planning to work very closely with the LW team on this project (especially during the setup phase), we'd prefer to run the EA Forum as a separate, independent project. This also gives us the affordance to do things differently in the future if desired (e.g. have a different karma system, different homepage layout etc).

Comment author: brunoparga 16 December 2017 01:03:36AM 2 points [-]

This isn't a commitment, but, just as a curiosity: does the CEA take cryptocurrency?

Comment author: SamDeere 21 December 2017 10:23:33PM 1 point [-]

An update on this: Cryptocurrency donations are now live on the site, so you can now enter the lottery (or make a regular donation to EA Funds) using BTC, ETH and LTC.

Comment author: Jess_Riedel 18 December 2017 04:18:24AM 0 points [-]

Could you explain your first sentence? What risks are you talking about?

Also, how does one lottery up further if all the block sizes are $100k? Dividing it up into multiple blocks doesn't really work.

Comment author: SamDeere 18 December 2017 09:42:53PM *  1 point [-]

An alternative model for variable pot sizes is to have a much larger guarantor (or a pool of guarantors), and then run rolling lotteries. Rather than playing against the pool, you're just playing against the guarantor, and you could set the pot size you wanted to draw up to (e.g. your $1000 donation could give you a 10% shot at a $10k pot, or a 1% shot at a $100k pot). The pot size should probably be capped (say, at $150k), both for the reasons Paul/Carl outlined re diminishing returns, and to avoid pathological cases (e.g. a donor taking a $100 bet on a billion dollars etc). Because you don't have to coordinate with other donors, the lottery is always open, and you could draw the lottery as soon as your payment cleared. Rather than getting the guarantor to allocate a losing donation, you could also 'reinvest' the donations into the overall lottery pool, so eventually the system is self-sustaining and doesn't require a third-party guarantor. [update: this model may not be legally possible, so possibly such a scheme would require an ongoing guarantor]
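
To make the mechanics concrete, here's a minimal sketch of how a rolling draw against a guarantor could work (my own illustration — the $150k cap is the figure mentioned above; everything else is hypothetical):

```python
import random

POT_CAP = 150_000  # cap mentioned above, to avoid pathological bets

def rolling_draw(donation: float, chosen_pot: float) -> bool:
    """Return True if this donation wins the right to recommend the whole pot."""
    pot = min(chosen_pot, POT_CAP)
    win_probability = donation / pot  # e.g. $1,000 against a $10k pot = a 10% chance
    return random.random() < win_probability

# Example: a $1,000 donation playing for a $100k pot should win about 1% of the time.
trials = 100_000
wins = sum(rolling_draw(1_000, 100_000) for _ in range(trials))
print(f"Empirical win rate: {wins / trials:.2%}")
```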

This is more administratively complex (if only because we can't batch the manual parts of the process to defined times), but there's a more automated version of this which could be cool to run. At this stage I want to validate the process of running the simpler version, and then, if it's something there's demand for (and we have enough guarantor funds to make it feasible), we can look into running the rolling version sometime next year.

Comment author: RyanCarey 17 December 2017 02:32:48PM *  4 points [-]

Ideally, non-EAs can enter and win. As Carl said, on a first cut analysis, what you're doing doesn't depend on what other people do. You're simply buying a 1/m chance of donating m times your contribution, and if other EAs or non-EAs want to do the same, then all power to them.

In practice, CEA technically gets to make the final donation decision. But I can't see them violating a donor's choice.

Comment author: SamDeere 17 December 2017 11:42:10PM 5 points [-]

In practice, CEA technically gets to make the final donation decision. But I can't see them violating a donor's choice.

To emphasise this: as CEA is running this lottery for the benefit of the community, it's important for the community to have confidence that CEA will follow their recommendations (otherwise people might be reluctant to participate). So, to be clear, while CEA makes the final call on the grant, unless there's a good reason not to (see the 'Caveats and Limitations' section on the EA.org Lotteries page) we'll do our best to follow a donor's recommendation, even if it's to a recipient that wouldn't normally be thought of as strictly EA.


What happens if a non-EA wins?

It's worth pointing out that your motivation to enter the lottery should be to win it, not to put money into a pot that you in fact hope will be won and allocated by someone else better qualified to do the research than you are. If there are people entering the lottery who you think would make better decisions than you (even in the event that you won), then you should either donate on their behalf (i.e. agree with them in advance that they can do the research and make the recommendation if you win), or wait for the lottery draw and then follow their recommendation if they win.

(not implying that this necessarily is your motivation, just that "I'll donate hoping for someone else to win" is a meme that I've noticed comes up a lot when talking about the lottery and I wanted to address it)

Comment author: Carl_Shulman 16 December 2017 05:38:36PM 2 points [-]

Maybe mention that on the site? There are a lot of crypto donations happening now.

Comment author: SamDeere 17 December 2017 09:03:17AM 2 points [-]

Agreed — I'll get this updated early next week.
