New & upvoted

Quick takes

Next month, two EAGx events are happening in new locations: Austin and Copenhagen! Applications for these events are closing soon:

* Apply to EAGxAustin by this Sunday, March 31
* Apply to EAGxNordics by April 7

These conferences are primarily for people who are at least familiar with the core ideas of effective altruism and are interested in learning more about what to do with these ideas. We're particularly excited to welcome people working professionally in the EA space to connect with others nearby and provide mentorship to those new to the space.

If you want to attend but are unsure about whether to apply, please err on the side of applying! If you've applied to attend an EA Global or EAGx event before, you can use the same application for either event.

(This is a draft I wrote in December 2021. I didn't finish and publish it then, in part because I was nervous it could be too spicy. At this point, with the discussion post-ChatGPT, it seems far more boring, and someone recommended I post it somewhere.)

Thoughts on the OpenAI Strategy

OpenAI has one of the most audacious plans out there, and I'm surprised at how little attention it's gotten.

First, they say flat out that they're going for AGI. Then, when they raised money in 2019, they included a clause capping investor returns at 100x:

> "Economic returns for investors and employees are capped... Any excess returns go to OpenAI Nonprofit... Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress."[1]

On Hacker News, one of their employees says:

> "We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company." [2]

You can read more about this mission in the charter:

> "We commit to use any influence we obtain over AGI's deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
>
> Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."[3]

This is my [incredibly rough and speculative, based on the above posts] impression of the plan they are proposing:

1. Make AGI.
2. Turn AGI into huge profits.
3. Give 100x returns to investors.
4. Dominate much (most?) of the economy, with all remaining profits going to the OpenAI Nonprofit.
5. Use AGI for "the benefit of all"?

I'm really curious what step 5 is supposed to look like exactly. I'm also very curious, of course, what they expect step 4 to look like.

Keep in mind that making AGI is a really big deal. If you're the one company that has an AGI, and if you have a significant lead over anyone else that does, the world is sort of your oyster.[4] If you have a massive lead, you could outwit legal systems, governments, and militaries.

I imagine that the 100x return cap means that the excess earnings would go into the hands of the nonprofit, which essentially means Sam Altman, senior leadership at OpenAI, and perhaps the board of directors (if legal authorities have any influence post-AGI). This would be a massive power gain for a small subset of people.

If DeepMind makes AGI, I assume the money would go to investors, which would mean it would be distributed to all of the Google shareholders. But if OpenAI makes AGI, the money will go to the leadership of OpenAI, on paper to fulfill the mission of OpenAI.

On the plus side, I expect that this subset is much more like the people reading this post than most other AGI competitors would be (the Chinese government, for example). I know some people at OpenAI, and my hunch is that the people there are very smart and pretty altruistic. It might well be about the best we could expect from a tech company.

And, to be clear, it's probably incredibly unlikely that OpenAI will actually create AGI, and even more unlikely they will do so with a decisive edge over competitors. But I'm sort of surprised so few other people seem at least a bit concerned and curious about the proposal?

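To make the quoted 100x cap concrete, here's a minimal sketch of the arithmetic, assuming the simplest reading of the structure (the investor receives at most 100x their investment, and everything above that flows to the nonprofit). All figures are hypothetical illustrations, not OpenAI's actual terms:

```python
def split_returns(investment: float, gross_payout: float, cap_multiple: float = 100.0):
    """Split a gross payout between a return-capped investor and the nonprofit."""
    cap = cap_multiple * investment               # most the investor can ever receive
    to_investor = min(gross_payout, cap)          # investor is paid up to the cap
    to_nonprofit = max(gross_payout - cap, 0.0)   # any excess flows to the nonprofit
    return to_investor, to_nonprofit

# Hypothetical: a $10M first-round stake whose share of profits would gross $5B
investor_share, nonprofit_share = split_returns(10e6, 5e9)
print(f"Investor receives  ${investor_share:,.0f}")   # $1,000,000,000 (hits the 100x cap)
print(f"Nonprofit receives ${nonprofit_share:,.0f}")  # $4,000,000,000 (the excess)
```
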
My impression is that most press outlets haven't thought much at all about what AGI would actually mean, and most companies and governments just assume that OpenAI is dramatically overconfident in itself.

(Aside on the details of Step 5)

I would love more information on Step 5, but I don't blame OpenAI for not providing it.

* Any precise description of how a nonprofit would spend "a large portion of the entire economy" would upset a bunch of powerful people.
* Arguably, OpenAI doesn't really need to figure out Step 5 unless their odds of actually having a decisive AGI advantage seem more plausible.
* I assume it's really hard to actually put together any reasonable plan now for Step 5.

My guess is that we really could use some great nonprofit and academic work to help outline what a positive and globally acceptable Step 5 would look like (one that wouldn't upset any group too much if they were to understand it). There's been previous academic work on a "windfall clause"[5] (their 100x cap would basically count), so better work on Step 5 seems like an obvious next step.

[1] https://openai.com/blog/openai-lp/
[2] https://news.ycombinator.com/item?id=19360709
[3] https://openai.com/charter/
[4] This was termed a "decisive strategic advantage" in the book Superintelligence by Nick Bostrom.
[5] https://www.effectivealtruism.org/articles/cullen-okeefe-the-windfall-clause-sharing-the-benefits-of-advanced-ai/

Also, see: https://www.cnbc.com/2021/03/17/openais-altman-ai-will-make-wealth-to-pay-all-adults-13500-a-year.html

> Artificial intelligence will create so much wealth that every adult in the United States could be paid $13,500 per year from its windfall as soon as 10 years from now.

https://www.techtimes.com/articles/258148/20210318/openai-give-13-500-american-adult-anually-sam-altman-world.htm
https://moores.samaltman.com/
https://www.reddit.com/r/artificial/comments/m7cpyn/openais_sam_altman_artificial_intelligence_will/

[GIF] A feature I'd love on the forum: while posts are read back to you, the part of the text that is being read is highlighted. This exists on Naturalreaders.com, and I'd love to see it here (great for people with wandering minds, like me).

Reflecting on the upcoming EAGx event in Utrecht, I find myself both excited and cautiously optimistic about its potential to further grow the Dutch EA community. My experiences from the last EAGx in the Netherlands marked a pivotal moment in my own EA journey (significantly grounding it locally) and boosted the community's growth. I think this event also contributed to the growth of the 10% club and the founding of the School for Moral Ambition this year, highlighting the Netherlands as fertile ground for EA principles.

However, I'm less inclined to view the upcoming event as an opportunity to introduce proto-EAs. Recalling the perceived expense of the previous Rotterdam edition, I'm concerned that the cost may deter potential newcomers, especially given the feedback I've heard regarding its perceived extravagance. I think we all understand why these events are worth our charitable euros, but I have a hard time explaining that to newcomers who are attracted to EA for its (perceived) efficiency/effectiveness. While the funding landscape may have changed (and this problem may have solved itself through that), I think it remains crucial to consider the aesthetics of events like these, where the goal is in part to welcome new members into our community.

Popular comments

Recent discussion

I think it's a nice op-ed; I also appreciate the communication strategy here—anticipating that SBF's sentencing will reignite discourse around SBF's ties to EA, and trying to elevate that discourse (in particular by highlighting the reforms EA has undertaken over the past 1.5 years).

Lorenzo Buonanno
4m
You can use https://archive.is/ to read paywalled articles, depending on your ethical views on the matter.
Kaleem
8m
I was going to suggest the same thing, but I wanted to be able to read the article before pointing this out.

This is a reading list on the long reflection and the closely related, more recently coined notions of ASI governance, reflective governance, and grand challenges.

I claim that this area outscores regular AI safety on importance[1] while being significantly...

Iyngkarran Kumar
2h
Great resource, thanks for putting this together!
Wei Dai
16h
Thanks, lots of interesting articles in this list that I missed despite my interest in this area. One suggestion I have is to add some studies of failed attempts at building/reforming institutions; otherwise one might get a skewed view of the topic. (Unfortunately I don't have specific readings to suggest.)

A related topic you don't mention here (maybe due to lack of writings on it?) is whether humanity should pause AI development and have a long (or even short!) reflection about what it wants to do next, e.g., resume AI development or do something else like subsidize intelligence enhancement (e.g., embryo selection) for everyone who wants it, so more people can meaningfully participate in deciding the fate of our world. (I note that many topics on this reading list are impossible for most humans to fully understand, perhaps even with AI assistance.)

This neglect is itself perhaps one of the most important puzzles of our time. With AGI very plausibly just a few years away, why aren't more people throwing money or time/effort at this cluster of problems just out of self-interest? Why isn't there more intellectual/academic interest in these topics, many of which seem so intrinsically interesting to me?

> This neglect is itself perhaps one of the most important puzzles of our time. With AGI very plausibly just a few years away, why aren't more people throwing money or time/effort at this cluster of problems just out of self-interest? Why isn't there more intellectual/academic interest in these topics, many of which seem so intrinsically interesting to me?

I think all of:

  • Many people seem to believe in something like "AI will be a big deal, but the singularity is much further off (or will never happen)".
  • People treat the singularity in far mode even if they adm...

If you've read Leif's WIRED article or Poverty is No Pond & have questions for him, I'd love to share them with him & in turn share his answers here.

Thank you, M, for sharing this with me & encouraging me to connect.


Questions designed to trip him up or teach him a lesson are emotionally tempting, but don't seem very useful to me. Better to ask him how he thinks practical stuff can be improved, or what he thinks particularly big mistakes of GiveWell or other EA orgs were in terms of funding decisions, not broad philosophy (we've all heard the standard objections to consequentialism before). I suspect he won't have any good suggestions on the latter, but you never know.

Answer by Jonathan Paulson
5h
I am a GiveWell donor because I want to spend money to improve the world. Should I do something else with that money instead? If so, what?
Answer by Oscar Delaney
11h
My understanding is that you are unsupportive of earning-to-give. I agree the trappings of expensive personal luxuries are both substantively bad (often) and poor optics. But the core idea, that some people are very lucky and have the opportunity to earn huge amounts of money which they can (and should) then donate, and that this can be very morally valuable, seems right to me. My guess is that regardless of your critiques of specific charities (bednets, deworming, CATF), you still think there are morally important things to do with money. So what do you think of ETG? Why is the central idea wrong (if you indeed think that)?
Jason commented on AIM Animal Initiatives 12m ago

TL;DR: If you're an EA-minded animal funder donating $200K/year or more, we'd love to connect with you about several exciting initiatives that AIM is launching over the coming months.

AIM (formerly Charity Entrepreneurship) has a history of incubating and supporting...


I think in general, our research is pretty unusual in that we are quite willing to publish research that has a fairly limited number of hours put into it. Partly, this is because our research is not aimed at external actors (e.g., convincing funders, the broader animal movement, other orgs) so much as at people already fairly convinced about founding a charity, and at the quite specific question of what would be the best org to found. We do take an approach that is more accepting of errors, particularly ones that do not affect endline decisions con...
Jason
34m
To the extent this view is both valid and widely held, and the reports are public, it should be possible to identify at least some specific examples without compromising your anonymity. While I understand various valid reasons why you might not want to do that, I don't think it is appropriate for us to update on a claim like this from a non-established anonymous account without some sort of support.
Tyler Johnston
1h
For what it's worth, I have no affiliation with CE, yet I disagree with some of the empirical claims you make — I've never gotten the sense that CE has a bad reputation among animal advocacy researchers, nor is it clear to me that the charities you mentioned were bad ideas prior to launching. Then again, I might just not be in the know. But that's why I really wish this post was pointing at specific reasoning for these claims rather than just saying it's what other people think. If it's true that other people think it, I'd love to know why they think it! If there are factual errors in CE's research, it seems really important to flag them publicly. You even mention that the status quo for giving in the animal space (CE excepted) is "very bad already," which is huge if true given the amount of money at stake, and definitely worth sharing examples of what exactly has gone wrong.

I am following the advice of Aaron Gertler and writing a post about my job. 80,000 Hours has independent career path pages dedicated to getting an economics PhD and doing academic research, but the specifics of my personal experience may be of interest. Plus, it was fun ...


Hi Vasco, thanks for reading. And thanks for your dedication to animals :) I've seen a few of your posts on this topic.

If you think you'll be interested in economics PhD programs, I would encourage you to aim to apply for the next cycle (Dec '24/Jan '25). There's a lot of randomness in the process, and your grades will matter more than RA experience, so I'd say go for it as soon as you can, given how long these programs are. If you don't get in anywhere, you can be applying for RA-ships in the meantime, and take one if that's your best option before trying...

James Özden posted a Quick Take 1h ago

Social Change Lab has two exciting opportunities for people passionate about social movements, animal advocacy and research to join our team!

Director (Maternity Cover)
We are looking for a strategic leader to join our team as interim Director. This role will be maternity cover for our current Director (me!) and will be a 12-month contract from July 2024. As Director, you would lead our small team in delivering cutting-edge research on the outcomes and strategies of the animal advocacy and climate movements and ensuring widespread communication of this work to key stakeholders.

Research and Communications Officer
We also have a potential opportunity for a Research and Communications Officer to join our team for 12 months. Please note this role is dependent on how hiring for our interim Director goes, as we will likely only hire one of these two roles.

Please see our Careers page for the full details of both roles and how to apply. If you have any questions about either role, please reach out to Mabli at mabli@socialchangelab.org


TL;DR: Global performance indicators (GPIs) compare countries' policy performance, encouraging competition and pressuring policymakers for reforms. While effective, creating GPIs carries risks such as public backlash. However, certain characteristics can mitigate these ...


Executive summary: Global performance indicators (GPIs) that rank jurisdictions on animal welfare policies could be an effective and low-cost tool to drive policy changes, if designed well to maximize impact and minimize risks.

Key points:

  1. GPIs can pressure policymakers to enact reforms by stimulating competition between jurisdictions and attracting media attention.
  2. Evidence suggests GPIs can influence policy in desired directions, at least in some contexts, though precise impact is hard to measure.
  3. Key risks include public/political backlash and policymakers...

A new report from Faunalytics shows promising avenues for collaboration between animal protection organizations and environmental groups - the study interviewed environmental groups in the U.S., China, and Brazil and found that many are open to working with the animal...


Executive summary: A new report from Faunalytics reveals promising opportunities for collaboration between animal protection organizations and environmental groups, with many environmental organizations open to or already working with animal advocates, especially on legal advocacy, education, and promoting plant-based diets.

Key points:

  1. Many environmental organizations, particularly those focused on conservation, sustainability, and deforestation, are receptive to collaborating with animal advocacy groups.
  2. Key areas for collaboration include legal action, pol...