Comment author: Gregory_Lewis 11 July 2018 03:05:26PM 4 points

One key challenge I see is something like 'grant-making talent constraint'. The skills needed to make good grants (e.g. good judgement, domain knowledge, maybe tacit knowledge, maybe relevant network, possibly commissioning/governance/operations skill) are not commonplace, and hard to explicitly 'train' outside i) having a lot of money of your own to practise with, or ii) working in a relevant field (so people might approach you for advice). (Open Philanthropy's recent hiring round might provide another route, but places were limited and extraordinarily competitive).

Yet the talents needed to end up at (i) or (ii) are somewhat different, as are the skills one acquires there: neither (e.g.) having a lot of money and being interested in AI safety, nor being an AI safety researcher oneself, guarantees making good AI safety grants; time one spends doing either of these things is time one cannot dedicate to gaining grant-making experience.

Dividing this labour (as the suggestions in the OP point towards) seems the way to go. Yet this can only get you so far if 'grantmaking talent' is not only limited among people with the opportunity to make grants, but limited across the EA population in general. Further, good grant-makers will gravitate to the largest pools of funding (reasonably enough, as this is where their contribution has the greatest leverage). This predictably leads to gaps in the funding ecosystem where 'good projects from the point of view of the universe' and 'good projects from the point of view of the big funders' subtly differ: I'm not sure I agree with the suggestions in the OP (i.e. upskilling people, new orgs), but I find Carl Shulman's remarks here persuasive.

Comment author: alexherwix 11 July 2018 02:47:56PM 2 points

Dear Halstead,

thank you for the effort! Updated information on effective climate charities is a great and valuable thing to me and probably many other EAs.

However, I had a look at the website and the report, but I couldn't really find the discussion of why you do not recommend Cool Earth (I searched for the name Cool Earth and only found one unrelated mention). As a past donor to that charity, I would find it awesome to have a direct link to that information.

Additionally, without having read the report in detail, I think it would be a great addition if you didn't exclusively focus on the selected recommendations but positioned them in context with the other options. That way I could more easily understand whether I agree with your selection.

Anyhow, thank you for posting this and investing the time and effort to make this information accessible to a broader audience!

Cheers, Alex

In response to comment by remmelt  (EA Profile) on Open Thread #40
Comment author: Denkenberger 11 July 2018 01:46:23PM 2 points

I like that the forum is not sorted so one can keep abreast of the major developments and debates in all of EA. I don't think there is so much content as to be overwhelming.

Comment author: Halstead 11 July 2018 01:42:28PM *  6 points

Thanks for getting the conversation going on this topic, which hasn't received enough systematic attention from EAs. An excellent treatment of this issue is given by Paul Brest here - https://ssir.org/articles/entry/impact_investing . This suggests that the prospects of achieving market-rate returns and having social impact are dim. One may be able to have counterfactual impact by accepting below-market returns or, at the extreme, by providing a grant to a company. (Open Phil has invested in Impossible Foods, presumably accepting below-market returns.)

One observation I have is that there is a big step between showing that impact investing might work in some conditions and actually finding good opportunities. It seems like identifying good opportunities would take up a lot of serious research time - of the same order as we would expect for identifying a recommended GiveWell charity. A glance through some impact investing platforms suggests they offer quite shallow analysis of enterprises that look unlikely to be effective. So, I think we should acknowledge that this space is worth exploring but be very sceptical about any particular opportunity, whether that be Wave, World Tree or whatever.

Comment author: Halstead 11 July 2018 01:28:23PM *  1 point

If the solar company is offering below market returns, then an impact investment is equivalent to a grant to that company. This opens up some space for impact investing to have counterfactual impact, provided the investment opportunity stands a decent chance of success.

Comment author: kbog  (EA Profile) 11 July 2018 12:32:16PM *  4 points

If capital markets are efficient and most people aren't impact investors, then there is no benefit to impact investing, as the coal company can get capital from someone else for the market rate as soon as you back out, and the solar company will lose most of its investors unless it offers a competitive rate of return. At the same time, there is no cost to impact investing.

In reality I think things are not always like this, but not only does inefficiency imply that impact investing has an impact, it also implies that you will get a lower financial return.

For most of us, our cause priorities are not directly addressed by publicly traded companies, so I think impact investing falls below the utility/returns frontier set by donations and investments. You can pick a combination of greedy investments and straight donations that is Pareto superior to an impact investment. If renewable energy, for instance, is one of your top cause priorities, then perhaps it is a different story.
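As a rough numerical sketch of that Pareto point (every figure below is a made-up assumption, not an estimate of any real opportunity): holding financial returns fixed, the greedy-invest-and-donate combination dominates the impact investment whenever a top charity produces more social value per marginal dollar than the enterprise does.

```python
# All numbers are purely hypothetical placeholders.
capital = 10_000                  # amount available to allocate
market_return = 0.07              # return on an ordinary ("greedy") investment
impact_return = 0.04              # below-market return on the impact investment
impact_social_value = 150         # guessed social value created by the impact investment
donation_value_per_dollar = 1.0   # guessed social value per dollar given to a top charity

# Option A: put everything in the impact investment.
a_financial = capital * impact_return            # $400 kept
a_social = impact_social_value                   # 150 units of impact

# Option B: invest greedily, then donate the extra return so financial outcomes match.
b_gross = capital * market_return                # $700
donation = b_gross - a_financial                 # $300 donated
b_financial = b_gross - donation                 # $400 kept, same as option A
b_social = donation * donation_value_per_dollar  # 300 units of impact

print(f"A: ${a_financial:.0f} kept, {a_social} units of impact")
print(f"B: ${b_financial:.0f} kept, {b_social:.0f} units of impact")
```

With these placeholder numbers option B dominates; the comparison would only flip if the enterprise produced more social value per marginal dollar than the best available donation target, as might hold for something like renewable energy on some views.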

Comment author: kbog  (EA Profile) 11 July 2018 12:09:16PM *  1 point

What if AI exploring moral uncertainty finds that there is provably no correct moral theory or right moral facts?

In that case it would be exploring traditional metaethics, not moral uncertainty.

But if moral uncertainty is used as a solution, then we just bake in some high-level criteria for the appropriateness of a moral theory, and the credences will necessarily sum to 1. This is little different from baking in coherent extrapolated volition. In either case the agent is directly motivated to do whatever it is that satisfies our designated criteria, and it will still want to do it regardless of what it thinks about moral realism.

Those criteria might be very vague and philosophical, or they might be very specific and physical (like 'would a simulation of Bertrand Russell say "a-ha, that's a good theory"?'), but either way they will be specified.
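A minimal sketch of what 'baking in' such criteria might look like, assuming the agent simply maximises credence-weighted choice-worthiness (the theories, credences, and scores below are placeholders, not anything specified in this thread):

```python
# Placeholder credences over candidate moral theories; by construction they sum to 1.
credences = {"utilitarianism": 0.5, "deontology": 0.3, "contractualism": 0.2}

# Placeholder choice-worthiness scores each theory assigns to the available actions.
choiceworthiness = {
    "utilitarianism": {"act_a": 10, "act_b": 4},
    "deontology":     {"act_a": -5, "act_b": 6},
    "contractualism": {"act_a": 2,  "act_b": 3},
}

def expected_choiceworthiness(action):
    # Weight each theory's verdict by the credence the agent assigns to that theory.
    return sum(c * choiceworthiness[theory][action] for theory, c in credences.items())

best = max(["act_a", "act_b"], key=expected_choiceworthiness)
print(best, expected_choiceworthiness(best))  # act_b, 4.4 with these placeholders
```

This glosses over intertheoretic comparability of the scores, which is one of the genuinely hard parts; the point is only that whatever criteria are chosen, the credences are constructed to sum to 1 and the agent acts on them regardless of its views on moral realism.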

Comment author: RyanCarey 11 July 2018 11:07:25AM 0 points

The concept starts with a website that has a fully digital grant application process. Applicants create user accounts that let them edit applications, and they can choose from a variety of options, like having the grant hidden or publicly displayed on the website, and posting under their real names or a pseudonym. Grants have discussion sections for the public to give feedback. Anonymous project submissions help people get feedback without reputation risk and judge a project's funding potential before committing significant time and resources to it. If the applicant opts to make an application public, it is displayed for everyone to see and comment on. Anyone can contact the project creator, have a public or private discussion on the grant website, and even fund a project directly.

What does this achieve that Google Docs linked from the EA Forum can't achieve? I think it should start with a more modest MVP that works within existing institutions and more extensively leverages existing software products.

The website is backed by a centralized organization that decides which proposals to fund via distributed grantmaking. Several part-time or full-time team members run the organization and assess the quality and performance of grantmakers. EAs in different cause areas can apply to be grantmakers. After an initial evaluation process, beginner grantmakers are given a role like “grant advisor” and given a small grantmaking budget. As grantmakers prove themselves effective, they are given higher roles and a larger grantmaking budget.

This sounds good.

While powered by decentralized grantmakers, the organization has centralized funding options for donors that do not want to evaluate grants themselves.

I'm not sure what you mean by "centralized funding options".

Donations can be tax-deductible, non-tax-deductible, or even structured as impact investments into EA initiatives. Donors can choose cause areas to fund, and can perhaps even fund individual grantmakers.

This sounds good.

Comment author: Denise_Melchin 11 July 2018 09:41:10AM 1 point

I agree collaboration between the various implementations of the different ideas is valuable and it can be good to help out technically. I'm less convinced of starting a fused approach as an outsider. As Ryan Carey said, the most important things for good work in this field are i) having people who are good at grantmaking, i.e. making funding decisions, and ii) the actual money.

Thinking about how to ideally handle grantmaking without having either strikes me as putting the cart before the horse. While it might be great to have a fused approach, I think it will largely be up to the projects that have i) and ii) whether they wish to collaborate further, though other people might be able to help with technical aspects.

Comment author: Gregory_Lewis 11 July 2018 06:14:39AM 1 point

I agree history generally augurs poorly for those who claim to know (and shape) the future. Although there are contrasting positive examples one can give (e.g. the moral judgements of the early Utilitarians were often ahead of their time re. the moral status of women, sexual minorities, and animals), I'm not aware of a good macrohistorical dataset that could answer this question - reality in any case may prove underpowered.

Yet whether or not in fact things would change with more democratised decision-making/intelligence gathering/ etc., it remains an open question whether this would be a better approach. Intellectual progress in many areas is no longer an amateur sport (see academia, cf. ongoing professionalisation of many 'bits' of EA, see generally that many important intellectual breakthroughs have historically been made by lone figures or small groups versus more swarm-intelligence-esque methods), and there's a 'clownside' risk of a lot of enthusiastic, well-meaning, but inexperienced people making attempts that add epistemic heat rather than light (inter alia). The bar to appreciate 'X is an important issue' may be much lower than 'can contribute usefully to X'.

A lot seems to turn on whether the relevant problems are more high-serial-depth (favouring intensive effort), high-threshold (favouring potentially rare ability), or broader and relatively shallower (favouring parallelization). I'd guess relevant 'EA open problems' are a mix, but this makes me hesitant for there to be a general shove in this direction.

I have mixed impressions about the items you give below (which I appreciate were meant more as a quick illustration than as a 'research agenda for the most important open problems in EA'). For some, I hold resilient confidence that the underlying claim is false; for more, I am uncertain, yet I suspect progress on answering them is not urgent (/feel we could punt on these for our descendants to figure out in the long reflection). In essence, my forecast is that this work would be expected to tilt the portfolios, but not by enough to constitute (what I would call) a 'cause X' (e.g. I can imagine getting evidence which suggests we should push more of a global health portfolio to mental health - or non-communicable disease - but not something so decisive that we think we should sink the entire portfolio there and withdraw from AMF/SCI/etc.).

Comment author: remmelt  (EA Profile) 11 July 2018 05:20:37AM *  1 point

Hi @Naryan,

I’m glad that this is a more powerful tool for you.

And kudos for working things from the foundations up! Personally, I still need to take a few hours with pen and paper to systematically work through the decision chain myself. A friend has been nudging me to do that. :-)

Gregory Lewis makes the argument above that some EAs are moving in the direction of long-term-future work and few are moving back out. I'm inclined to agree with him that they probably have good reasons for that.

I’d also love to see the results of some far-mode vs. near-mode questions put in the EA Survey, or perhaps sent out by Spencer Greenberg (not sure if there’s an existing psychological scale to gauge how much people are in each mode when working throughout the day). And of course, how they correlate with cause area preferences.

Max Dalton explained to me at EA Global London last year how ‘corrigibility’ was one of the most important traits to look for when selecting people you want to work with, so credit to him. :-) My contribution here is adding the distinction that people often seem more corrigible at some levels than others, especially when they’re new to the community.

(also, I love that sentence – “if the exploratory folks at the bottom raised evidence up the chain...”)

Comment author: remmelt  (EA Profile) 11 July 2018 04:52:41AM 0 points

Great! Cool to hear how you’re already making traction on this.

Perhaps EAWork.club has potential as a launch platform?

I’d also suggest emailing Kerry Vaughan from EA Grants to get his perspective. He’s quite entrepreneurial so probably receptive to hearing new ideas (e.g. he originally started EA Ventures, though that also seemed to take the traditional granting approach).

Let me know if I can be of use!

In response to comment by Peter_Hurford  (EA Profile) on Open Thread #40
Comment author: Milan_Griffes 11 July 2018 04:17:51AM *  0 points

I suspect it's basically impossible to model all the relevant far-future considerations in a way that feels believable (i.e. high confidence that the sign of all considerations is correct, plus high confidence that you're not missing anything crucial).

...the effect of AMF is still net positive.

I share this intuition, but "still net positive" is a long way off from "most cost-effective."

AMF has received so much scrutiny because it's a contender for the most cost-effective way to give money – I'm skeptical we can make believable claims about cost-effectiveness when we take the far future into account.

I'm more bullish about assessing the sign of interventions while taking the far future into account, though that still feels fraught.

In response to comment by Peter_Hurford  (EA Profile) on Open Thread #40
Comment author: Milan_Griffes 11 July 2018 04:12:44AM 1 point

A lot of big events in my life have had pretty in-the-moment-trivial-seeming things in the causal chains leading up to them. (And the big events appear contingent on the trivial-seeming parts of the chain.)

I think this is the case for a lot of stuff in my friends' lives as well, and appears to happen a lot in history too.

It's not the far future, but the experience of regularly having trivial-seeming things turn out to be important later on has built my intuition here.

Comment author: Peter_Hurford  (EA Profile) 11 July 2018 01:54:55AM 1 point

Separately, I'd wager pretty confidently that, taking into account all the possible long-term effects I can think of (population ethics, meat eating, economic development, differential technological development), the effect of AMF is still net positive. I wonder if you really can model all these things? I previously wrote about five ways to handle flow-through effects in analysis and like this kind of weighted quantitative modeling.
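A minimal sketch of one simple version of that weighted modeling, using the channels named above but purely made-up effect sizes and weights (not actual estimates):

```python
# Each flow-through channel gets a guessed sign/magnitude (arbitrary "impact units")
# and a weight reflecting how much we trust that estimate. All numbers are placeholders.
channels = {
    "direct lives saved":                     {"effect": +10.0, "weight": 0.9},
    "population ethics effects":              {"effect": +1.0,  "weight": 0.3},
    "meat eating increase":                   {"effect": -2.0,  "weight": 0.4},
    "economic development":                   {"effect": +3.0,  "weight": 0.5},
    "differential technological development": {"effect": -0.5,  "weight": 0.2},
}

weighted_total = sum(c["effect"] * c["weight"] for c in channels.values())
print(f"Weighted net effect: {weighted_total:+.2f}")  # positive => net positive under these guesses
```

The conclusion is only as good as the placeholder numbers, but writing them down makes the sign and weight assumptions explicit and easy to argue about.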

Comment author: Peter_Hurford  (EA Profile) 11 July 2018 01:51:52AM 1 point

I recently played two different video games with heavy time-travel elements. One of the games heavily implied that choosing differently made small differences for a little while but ultimately didn't matter in the grand scheme of things. The other game heavily implied that even the smallest of changes could butterfly effect into dramatically different changes. I kind of find both intuitions plausible so I'm just pretty confused about how confused I should be.

I wish there were a way to empirically test this, other than with time travel.

Comment author: Brendon_Wong 11 July 2018 12:28:03AM 2 points

Thanks for the insight, Remmelt! A good way to start this would be to create an MVP, much like Ryan Carey suggested, so that we can get started quickly, with a prebuilt application system (Google Forms, Google Docs, a forum, etc.) and possibly using a DAF or fiscal sponsor. The web app itself could take a while, but having public projects and public feedback in a forum or something would be reasonably close and take much less effort.

I am meeting with someone who has made some progress in this area early next week. Based on traction and the similarity between the other person's system and this system, I'll see if a new venture in this space could add value, or if existing projects in this space have a good chance of succeeding. One way or the other I'll be in touch!

Comment author: Brendon_Wong 11 July 2018 12:16:46AM *  1 point

If EA Funds wants an effortless "zero risk" option to hold the cash, putting all of the money in a high-yield business savings account looks like the way to go. This would probably only take several hours to set up.

According to various online reviews, the "Community Bank of Pleasant Hill Business Premier Money Management Account" seems the best, and the "Goldwater Bank Savings Plus Personal & Business Account" looks good as well. Free withdrawals seem to be limited to twice a month, but the withdrawal fee is pretty negligible relative to earning $20,000 in annual interest.

To increase yield, using CDs is an easy next step. Otherwise, opening a brokerage account and putting the capital into a money market fund or a short term bond fund would be a relatively low risk and higher yielding option.
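As a back-of-the-envelope illustration of where a figure like $20,000 a year could come from (the balance, rate, and fees below are assumptions, not actual EA Funds figures):

```python
# Hypothetical figures: roughly $1M of idle cash at about 2% APY would throw off
# about $20,000 a year; actual balances, rates, and fees will differ.
balance = 1_000_000
apy = 0.02
annual_interest = balance * apy   # $20,000

withdrawal_fee = 15               # assumed fee per withdrawal beyond the free allowance
extra_withdrawals = 10            # assumed number of paid withdrawals per year
net = annual_interest - withdrawal_fee * extra_withdrawals

print(f"Interest: ${annual_interest:,.0f}; net of withdrawal fees: ${net:,.0f}")
```

Even with a fairly generous allowance for paid withdrawals, the fees stay small next to the interest, which is the point being made above.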

In response to comment by Peter_Hurford  (EA Profile) on Open Thread #40
Comment author: Milan_Griffes 11 July 2018 12:16:23AM *  1 point

absolute believability is low. There's also an interesting meta-question...

I think the crux here is that absolute believability is low, such that you can't really trust the output of your analysis.

Agree the meta-question is interesting :-)

In response to comment by Peter_Hurford  (EA Profile) on Open Thread #40
Comment author: Milan_Griffes 11 July 2018 12:15:00AM 0 points

I'd argue we don't necessarily know yet whether this is true. It may well be true, but it may well be false.

I think it's almost certainly true (confidence ~90%) that far future effects account for the bulk of impact for at least a substantial minority of interventions (like at least 20%? But very difficult to quantify believably).

Also seems almost certainly true that we don't know for which interventions far future effects account for the bulk of impact.
