New & upvoted


Quick takes

Next month, two EAGx events are happening in new locations: Austin and Copenhagen! Applications for these events are closing soon:

* Apply to EAGxAustin by this Sunday, March 31
* Apply to EAGxNordics by April 7

These conferences are primarily for people who are at least familiar with the core ideas of effective altruism and are interested in learning more about what to do with these ideas. We're particularly excited to welcome people working professionally in the EA space to connect with others nearby and provide mentorship to those new to the space. If you want to attend but are unsure about whether to apply, please err on the side of applying! If you've applied to attend an EA Global or EAGx event before, you can use the same application for either event.
Reflecting on the upcoming EAGx event in Utrecht, I find myself both excited and cautiously optimistic about its potential to further grow the Dutch EA community. My experiences from the last EAGx in the Netherlands marked a pivotal moment in my own EA journey (significantly grounding it locally) and boosted the community's growth. I think this event also contributed to the growth of the 10% club and the founding of the School for Moral Ambition this year, highlighting the Netherlands as fertile ground for EA principles.

However, I'm less inclined to view the upcoming event as an opportunity to introduce proto-EAs. Recalling the previous Rotterdam edition's perceived expense, I'm concerned that the cost may deter potential newcomers, especially given the feedback I've heard regarding its perceived extravagance. I think we all understand why these events are worth our charitable Euros, but I have a hard time explaining that to newcomers who are attracted to EA for its (perceived) efficiency/effectiveness.

While the funding landscape may have changed (and this problem may have solved itself through that), I think it remains crucial to consider the aesthetics of events like these, where the goal is in part to welcome new members into our community.
(This is a draft I wrote in December 2021. I didn't finish+publish it then, in part because I was nervous it could be too spicy. At this point, with the discussion post-ChatGPT, it seems far more boring, and someone recommended I post it somewhere.)

Thoughts on the OpenAI Strategy

OpenAI has one of the most audacious plans out there and I'm surprised at how little attention it's gotten.

First, they say flat out that they're going for AGI. Then, when they raised money in 2019, they had a clause that says investors will be capped at getting 100x of their returns back.

> "Economic returns for investors and employees are capped... Any excess returns go to OpenAI Nonprofit... Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress."[1]

On Hacker News, one of their employees says,

> "We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company." [2]

You can read more about this mission in the charter:

> "We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
>
> Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."[3]

This is my [incredibly rough and speculative, based on the above posts] impression of the plan they are proposing:

1. Make AGI
2. Turn AGI into huge profits
3. Give 100x returns to investors
4. Dominate much (most?) of the economy, have all profits go to the OpenAI Nonprofit
5. Use AGI for "the benefit of all"?

I'm really curious what step 5 is supposed to look like exactly. I’m also very curious, of course, what they expect step 4 to look like.

Keep in mind that making AGI is a really big deal. If you're the one company that has an AGI, and if you have a significant lead over anyone else that does, the world is sort of your oyster.[4] If you have a massive lead, you could outwit legal systems, governments, and militaries.

I imagine that the 100x return cap means that the excess earnings would go into the hands of the nonprofit, which essentially means Sam Altman, senior leadership at OpenAI, and perhaps the board of directors (if legal authorities have any influence post-AGI). This would be a massive power gain for a small subset of people.

If DeepMind makes AGI, I assume the money would go to investors, meaning it would be distributed to all of the Google shareholders. But if OpenAI makes AGI, the money will go to the leadership of OpenAI, on paper to fulfill the mission of OpenAI.

On the plus side, I expect that this subset is much more like the people reading this post than most other AGI competitors would be (the Chinese government, for example). I know some people at OpenAI, and my hunch is that the people there are very smart and pretty altruistic. It might well be about the best we could expect from a tech company.

And, to be clear, it's probably incredibly unlikely that OpenAI will actually create AGI, and even more unlikely they will do so with a decisive edge over competitors. But I'm sort of surprised so few other people seem at least a bit concerned and curious about the proposal?
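To make the 100x cap concrete, here is a toy sketch of how such a cap would split returns. This is a simplified reading of the quoted description above, not OpenAI's actual legal terms; the function and the dollar figures are purely illustrative:

```python
# Toy illustration of a 100x return cap (a simplified reading of the quoted
# description above, not OpenAI's actual legal terms).

def split_returns(investment: float, gross_return: float, cap_multiple: float = 100.0):
    """Split a gross return between the investor (capped) and the nonprofit."""
    to_investor = min(gross_return, investment * cap_multiple)
    to_nonprofit = max(gross_return - to_investor, 0.0)
    return to_investor, to_nonprofit

# A hypothetical $10M first-round investment that eventually returned $5B:
investor, nonprofit = split_returns(10e6, 5e9)
print(f"investor keeps ${investor / 1e9:.1f}B; nonprofit gets ${nonprofit / 1e9:.1f}B")
# -> investor keeps $1.0B; nonprofit gets $4.0B
```

The point of the toy numbers: the larger the AGI windfall, the more completely the proceeds flow to the nonprofit rather than to investors.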
My impression is that most press outlets haven't thought much at all about what AGI would actually mean, and most companies and governments just assume that OpenAI is dramatically overconfident in themselves.

----------------------------------------

(Aside on the details of Step 5)

I would love more information on Step 5, but I don’t blame OpenAI for not providing it.

* Any precise description of how a nonprofit would spend "a large portion of the entire economy" would upset a bunch of powerful people.
* Arguably, OpenAI doesn't really need to figure out Step 5 unless their odds of actually having a decisive AGI advantage start to seem more plausible.
* I assume it's really hard to put together any reasonable plan for Step 5 now.

My guess is that we could really use some great nonprofit and academic work to help outline what a positive and globally acceptable Step 5 would look like (one that wouldn't upset any group too much if they were to understand it). There's been previous academic work on a "windfall clause"[5] (their 100x cap would basically count), so better work on Step 5 seems clearly worthwhile.

[1] https://openai.com/blog/openai-lp/
[2] https://news.ycombinator.com/item?id=19360709
[3] https://openai.com/charter/
[4] This was called a "decisive strategic advantage" in the book Superintelligence by Nick Bostrom.
[5] https://www.effectivealtruism.org/articles/cullen-okeefe-the-windfall-clause-sharing-the-benefits-of-advanced-ai/

----------------------------------------

Also, see:

https://www.cnbc.com/2021/03/17/openais-altman-ai-will-make-wealth-to-pay-all-adults-13500-a-year.html

> Artificial intelligence will create so much wealth that every adult in the United States could be paid $13,500 per year from its windfall as soon as 10 years from now.

https://www.techtimes.com/articles/258148/20210318/openai-give-13-500-american-adult-anually-sam-altman-world.htm
https://moores.samaltman.com/
https://www.reddit.com/r/artificial/comments/m7cpyn/openais_sam_altman_artificial_intelligence_will/
A periodic reminder that you can just email politicians and then meet them (see screenshot below).

Recent discussion

TLDR: If you're an EA-minded animal funder donating $200K/year or more, we'd love to connect with you about several exciting initiatives that AIM is launching over the next several months.

AIM (formerly Charity Entrepreneurship) has a history of incubating and supporting...

mildlyanonymous (1h)
I think this oversimplifies something a lot more complex, and I'm surprised it's a justification you use for this. Of course, on some level what you're saying is correct in many cases. But imagine you recommend a global health charity to be launched. GiveWell says "you're misinterpreting some critical evidence, and this isn't as impactful as you think". Charities on the ground say "this will impact our existing work, so try doing it this other way". You launch the intervention anyway. The founders immediately get the same feedback, including from trying the intervention, then pivot to coordinating more and aligning with external experts. This seems much more analogous to what happens in the animal space, and it seems like a clear indicator that people were skeptical. Charities aren't for-profits, which exist in a vacuum of their own profitability; they are part of a broader ecosystem.
mildlyanonymous (1h)
I definitely agree that organizations should pivot as they learn how an intervention works in practice. The errors I refer to are more of this type: a cursory glance from an animal welfare scientist could have told you your research was missing key considerations, and the charity would not have wasted time on the recommended intervention. These seem like cheaply preventable issues.

Thanks for clarifying! We always have an expert view section in the report, and often consult animal science specialists, but it is possible we missed something. Could you tell me where specifically we made a mistake regarding animal science that could have changed the recommendation? I want to look into it and fact-check it, and, if it is right, avoid making this mistake in the future.

If you've read Leif's WIRED article or Poverty is No Pond & have questions for him, I'd love to share them with him & in turn share his answers here.

Thank you, M, for sharing this with me & encouraging me to connect.


I am a GiveWell donor because I want to spend money to improve the world. Should I do something else with that money instead? If so, what?

Answer by Oscar Delaney (7h)
My understanding is you are unsupportive of earning-to-give. I agree the trappings of expensive personal luxuries are often both substantively bad and poor optics. But the core idea that some people are very lucky and have the opportunity to earn huge amounts of money which they can (and should) then donate, and that this can be very morally valuable, seems right to me. My guess is that, regardless of your critiques of specific charities (bednets, deworming, CATF), you still think there are morally important things to do with money. So what do you think of ETG - why is the central idea wrong (if you indeed think that)?
Answer by Rebecca (11h)
1. What do you see as the importance of GiveWell specifically pulling out a “deaths caused” number, vs factoring that number in by lowering the “lives saved” number?
2. Are you saying that no competent philosopher would use their own definition of altruism when what it “really” means is somewhat different? My experience of studying philosophy has been the reverse - defining terms uniquely is very common.
3. Is the implication of this paragraph, that all the events described happened after SBF started donating FTX money, intentional?
4. Does this mean you think prediction markets don’t end up working in practice to hold people to their track records of mid-probability predictions?

If anyone wants to see what making EA enormous might look like, check out Rutger Bregman's School for Moral Ambition (SMA).

It isn't an EA project (and his accompanying book has a chapter on EA that is quite critical), but the inspiration is clear and I'm sure there...


Thanks! IIRC, we focused on it substantially because a lot of the sign-ups for our programmes (e.g. the online course) were coming from LinkedIn even when we hadn't put much effort into it. The number of sign-ups and the proportion attributed to LinkedIn grew as we put more effort into it. This was mostly the work of our wonderful Marketing Manager, Ana. I don't have access to recent data or information about how it's gone, so I can't make much of a call on whether it was worth it relative to other possible uses of our/Ana's time.


Summary[1]

  • I introduce an analogy between present-day NIMBYs opposing local development, fearing it will make their neighbourhood worse, and ‘cosmic NIMBYs’ opposing making the future large, fearing it will lower the welfare of existing people.
  • On ~totalist population axiologies
...

Thanks for writing this up, Oscar! I largely disagree with the (admittedly tentative) conclusions, and am not sure how apt I find the NIMBY analogy. But even so, I found the ideas in the post helpfully thought-provoking, especially given that I would probably fall into the cosmic NIMBY category as you describe it. 

First, on the implications you list. I think I would be quite concerned if some of your implications were adopted by many longtermists (who would otherwise try to do good differently):

Support pro-expansion space exploration policies and laws

...

Summary: Even from an anti-realist stance on morality, there are various reasons we might expect moral convergence in practice.

[Largely written two years ago; cleaned up for draft amnesty week. The ideas benefited from comments and conversations with many people; errors...

Wei Dai (4h)
1. You seem to be assuming that people's extrapolated views in world A will be completely uncorrelated with their current views/culture/background, which seems a strange assumption to make.
2. People's extrapolated views could be (in part) selfish or partial, which is an additional reason that extrapolated views of you at different times may be closer than those of strangers.
3. People's extrapolated views not converging doesn't directly imply "it’s much much less likely that the world we end up with even if we save it is close to the ideal one by my lights", because everyone could still get close to what they want through trade/compromise, or you (and/or others with extrapolated views similar to yours) could end up controlling most of the future by winning the relevant competitions.
4. It's not clear that applying a heavy discount to world A makes sense, regardless of the above, because we're dealing with "logical risk", which seems tricky in terms of decision theory.

4 is a great point, thanks.

On 1-3, I definitely agree that I may prudentially prefer some possibilities over others. I've been assuming that from a consequentialist moral perspective the distribution of future outcomes still looks like the one I give in this post, but I guess it should actually look quite different. (I think what's going on is that in some sense I don't really believe in world A, so haven't explored the ramifications properly.)

[GIF] A feature I'd love on the forum: while posts are read back to you, the part of the text that is being read is highlighted. This exists on Naturalreaders.com and I would love to see it here (great for people who have wandering minds like me).


 


For what it's worth, I would find the first part of the issue (i.e. making the player "floating" or "sticky") already quite useful, and it seems much easier to implement.

As someone who works with software engineers, I have respect for how seemingly simple things can actually be technically challenging.

Yanni Kyriacos posted a Quick Take 15h ago

It breaks my heart when I see eulogy posts on the forum. And while I greatly appreciate people going to the effort of writing them (while presumably experiencing grief), it still doesn't feel like enough. We're talking about people who dedicated their lives to doing good, and all they get is a post. I don't have a suggestion to address this 'problem', and some may even feel that a post is enough, but I don't. Maybe there is no good answer and death just sucks. I dunno.


Like many organizations, Open Philanthropy has had multiple founding moments. Depending on how you count, we will be either seven, ten, or thirteen years old this year. Regardless of when you start the clock, it’s possible that we’ve changed more in the last two years than...


I really appreciated this report; it seemed like one of the most honest and open communications to come out of Open Philanthropy, and it helped me connect with your priorities and vision. A couple of specific things I liked:

I appreciated the comment about the Wytham Abbey purchase, recognising the flow-on effects Open Phil's decisions can have on the wider community, and even just acknowledging a mistake - something which is both difficult and uncommon in leadership.

"But I still think I personally made a mistake in not objecting to this grant back when the initial... (read more)

Vasco Grilo (17h)
Hello again Alex,

You discuss the allocation of funds across your 2 main areas, global health and wellbeing (GHW) and global catastrophic risks (GCR), but (as before) you do not say anything about the allocation across animal and human interventions in the GHW portfolio. I assume you do not think the funding going towards animal welfare interventions should be greatly increased, but I would say you should at least be transparent about your views.

For reference, I estimate the cost-effectiveness of corporate campaigns for chicken welfare is 13.6 DALY/$ (= 0.01*1.37*10^3), i.e. 680 (= 13.6/0.02) times Open Philanthropy's bar. I got that multiplying:

* The cost-effectiveness of GiveWell's top charities of 0.01 DALY/$ (50 DALY per 5 k$), which is half of Open Philanthropy's bar of 0.02 DALY/$.
* My estimate of 1.37 k (= 1.71*10^3/0.682*2.73/5) for the ratio between the cost-effectiveness of corporate campaigns for chicken welfare and GiveWell's top charities:
  * I calculated that corporate campaigns for broiler welfare increase near-term welfare 1.71 k times as cost-effectively as the lowest cost to save a life among GiveWell’s top charities at the time, 3.5 k$, corresponding to a cost-effectiveness of 0.286 life/k$ (= 1/(3.5*10^3)).
  * The current mean reciprocal of the cost to save a life of GiveWell’s 4 top charities is 0.195 life/k$ (= (3*1/5 + 1/5.5)*10^-3/4), i.e. 68.2 % (= 0.195/0.286) as high as the cost-effectiveness I just mentioned.
  * The ratio of 1.71 k in the 1st bullet refers to campaigns for broiler welfare, but Saulius estimated that ones for chicken welfare (broilers or hens) affect 2.73 (= 41/15) times as many chicken-years.
  * OP thinks “the marginal FAW [farmed animal welfare] funding opportunity is ~1/5th as cost-effective as the average from Saulius’ analysis”.
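As a sanity check, the chain of multiplications above can be reproduced in a few lines. A minimal sketch in Python, using only the figures quoted in the comment (these are Vasco's estimates, not independent data; rounding accounts for the small gap from the quoted 13.6 and 680):

```python
# Minimal sketch reproducing the cost-effectiveness arithmetic above.
# All inputs are the estimates quoted in the comment, not independent data.

givewell_top = 0.01                    # DALY/$ (50 DALY per 5 k$)
op_bar = 0.02                          # Open Philanthropy's bar, DALY/$

broiler_vs_cheapest_then = 1.71e3      # broiler campaigns vs cheapest life saved then (3.5 k$/life)
givewell_now_vs_then = 0.195 / 0.286   # ~0.682: current mean vs the 3.5 k$/life figure
chicken_vs_broiler_years = 41 / 15     # ~2.73: chicken-years affected, all chickens vs broilers
marginal_discount = 1 / 5              # marginal FAW funding ~1/5 as cost-effective as average

ratio = (broiler_vs_cheapest_then / givewell_now_vs_then
         * chicken_vs_broiler_years * marginal_discount)   # ~1.37 k
campaigns = givewell_top * ratio                           # ~13.7 DALY/$
print(f"{ratio:.0f}x GiveWell top charities; {campaigns:.1f} DALY/$; "
      f"{campaigns / op_bar:.0f}x OP's bar")
```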
Vasco Grilo (17h)
Thanks for the update, Alex! Could you elaborate on the influence of Cari and Dustin on your grantmaking (see what I have highlighted below), ideally by giving concrete examples?

This is a cross-post from the CGD Blog. For the original post and downloadable Note please visit: https://www.cgdev.org/publication/1-trillion-paradox-why-reforming-research-publishing-should-be-global-priority .

----

Our research system is a perplexing paradox. Each year, approximately $1 trillion of public funds are spent on research worldwide. Whole careers are spent making incremental improvements to research methods. Hundreds of millions of dollars are spent on a single clinical trial. And yet, the global system for sharing research results is a costly mess. Rooted in antiquated journal structures and marred by market failures, our research systems prioritise profit at the expense of accessibility, equity, and affordability, hindering our ability to fully reap the benefits of research. 

A prior CGD blog argued that research reform is a critical issue for global development and...

