JanB

978 karma · Joined May 2017

Comments: 74

FWIW, I am excited about Future Matters. I have experienced them as having great perspectives on how to effect change via policy and how to make movements successful and effective. I think they have a sufficiently different lens and expertise from many EA orgs that I'm really happy to have them working on these causes. I've also repeatedly donated to them over the years (they are one of my main donation targets).

Scott said in his email that OpenPhil is only taking donations >$250,000. Is this still true?

That makes sense, thanks. Although this will not apply to organisations/individuals that were promised funds from the Future Fund but didn't receive any, right? This case is pretty common, AFAICT.

Answer by JanB · Dec 05, 2022

Scott sent me the following email (reproduced here with his approval). Scott wants to highlight that he doesn't know anything beyond what's in the public posts on this issue.

I'd encourage people to email Scott; it's probably good for someone to have a list of interested donors.

------------------------------------
Scott's email:

SHORT VERSION

If you want to donate blindly and you can afford more than $250K, read here for details, then consider emailing Open Philanthropy at inquiries@openphilanthropy.org. If less than $250K, read here for details, then consider emailing Nonlinear at katwoods@nonlinear.org. You might want to check the longer section below for caveats first.

If you want to look over the charities available first, you can use the same contact info, or wait for them to email you. I did send them the names and emails of those of you who said you wanted to focus on charities in specific areas or who had other conditions. I hope they'll get back to you soon, but they might not; I'm sure they appreciate your generosity but they're also pretty swamped.

LONG VERSION (no need to read this if you're busy; it just expands on the information above)


Two teams have come together to work on this problem - one from Open Philanthropy Project, and one from Nonlinear.

I know Open Philanthropy Project well, and they're a good and professional organization. They're also getting advice from the former FTX Future Fund team (who were foundation staff not in close contact with FTX the company; I still trust them, and they're the experts on formerly FTX-funded charities). For logistical reasons they're limiting themselves to donors potentially willing to contribute over $250,000.

I don't know Nonlinear well, although a former ACX Grants recipient works there and says good things about it. Some people on the EA Forum have expressed concerns about them (see https://forum.effectivealtruism.org/posts/L4S2NCysoJxgCBuB6/announcing-nonlinear-emergency-funding); I have no context for this beyond the comments there. They don't seem to have a minimum donation. I'm trying to get in touch with them to learn more.

Important consideration: these groups are trying to balance two imperatives. First, the usual effective altruism do-as-much-good-as-possible imperative. But second, an imperative to protect the reputation of the EA ecosystem as a safe and trustworthy place to do charity work, where your money won't suddenly disappear, or at least somebody will try to help if it does. I think this means they will be unusually willing to help charities victimized by the FTX situation, even if these would seem marginal by their usual quality standards. I think this is honorable, but if you're not personally invested in the reputation of the EA ecosystem, you might want to donate non-blindly or look elsewhere.

Also, FTX Future Fund focused disproportionately on biosecurity, pandemic prevention, forecasting, AI alignment, and other speculative causes, so most of the charities these teams are trying to rescue will probably be in those categories. If you don't want to be mostly funding those, donate non-blindly or look elsewhere.

I've given (or will shortly give) both groups your details; they've promised to keep everything confidential and not abuse your emails. If they approach you in any way that seems pushy or makes you regret interacting with them, please let me know so I can avoid working with them in the future.

I can't give you great answers on ACX Grants now, but I'll hopefully know more soon, and if things don't work out with this opportunity I'd be happy to work with you further then.

Thanks again for your generosity, and please let me know if you have any questions.

Yours,
Scott

Thanks for investigating this and producing such an extremely thorough write-up; very useful!

JanB · 2y

I haven't read the comments, and this has probably been said many times already, but it doesn't hurt to say it again:
From what I understand, you've taken significant action to make the world a better place. You work in a job that does considerable good directly, and you donate your large income to help animals. That makes you a total hero in my book :-)

At the same time, though, your objection seems like a fully general argument that fundamental breakthroughs will never be necessary at any point, which seems quite unlikely.

Sorry, what I wanted to say is that it seems unclear whether fundamental breakthroughs are needed. They might be needed, or not. I personally am pretty uncertain about this and think that both options are possible. I think it's also possible that any breakthroughs that do happen won't change the general picture described in the OP much.

I agree with the rest of your comment!

I gave the comment a strong upvote because it's super clear and informative. I also really appreciate it when people spell out their reasons for "scale is not all you need", which doesn't happen that often.

That said, I don't agree with the argument or conclusion. Your argument, at least as stated, seems to be "tasks with the following criteria are hard for current RL with human feedback, so we'll need significant fundamental breakthroughs". The transformer was published 5 years ago. Back then, you could have used a very analogous argument to claim that language models would never do this or that task; yet for many of these tasks, language models can perform them now (emergent properties).

Yes, you can absolutely apply for conference and compute funding, either separately from an application for salary or in combination with one. E.g., if you're applying for salary funding anyway, it would be very common and normal to also apply for funding for a couple of conferences, equipment you need, and compute. I think you would probably go for cloud compute, but I haven't thought about it much.

Sometimes this can cause mild tax issues (e.g., if you receive the grant in one year but only spend the money on the conference the next year; or, in some countries, if you receive the funding as a private person and therefore can't deduct expenses).

Some organisations also offer funding via prepaid credit cards, e.g. for compute.


Maybe there are also other options, like getting an affiliation with some place and using their servers for compute, but this will often be hard.

Answer by JanB · Aug 12, 2022

I think you could apply for funding from a number of sources. If the budget is small, I'd start with the Longterm Future Fund: https://funds.effectivealtruism.org/funds/far-future
