New & upvoted


Posts tagged community

Quick takes

Next month, two EAGx events are happening in new locations: Austin and Copenhagen! Applications for these events are closing soon:

* Apply to EAGxAustin by this Sunday, March 31
* Apply to EAGxNordics by April 7

These conferences are primarily for people who are at least familiar with the core ideas of effective altruism and are interested in learning more about what to do with these ideas. We're particularly excited to welcome people working professionally in the EA space to connect with others nearby and provide mentorship to those new to the space. If you want to attend but are unsure about whether to apply, please err on the side of applying! If you've applied to attend an EA Global or EAGx event before, you can use the same application for either event.
(This is a draft I wrote in December 2021. I didn't finish and publish it then, in part because I was nervous it could be too spicy. At this point, with the post-ChatGPT discussion, it seems far more boring, and someone recommended I post it somewhere.)

Thoughts on the OpenAI Strategy

OpenAI has one of the most audacious plans out there, and I'm surprised at how little attention it's gotten.

First, they say flat out that they're going for AGI.

Then, when they raised money in 2019, they included a clause capping investors' returns at 100x their investment:

> "Economic returns for investors and employees are capped... Any excess returns go to OpenAI Nonprofit... Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress."[1]

On Hacker News, one of their employees says,

> "We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company." [2]

You can read more about this mission in the charter:

> "We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
>
> Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."[3]

This is my [incredibly rough and speculative, based on the above posts] impression of the plan they are proposing:

1. Make AGI.
2. Turn AGI into huge profits.
3. Give 100x returns to investors.
4. Dominate much (most?) of the economy, with all further profits going to the OpenAI Nonprofit.
5. Use AGI for "the benefit of all"?

I'm really curious what step 5 is supposed to look like exactly. I'm also very curious, of course, what they expect step 4 to look like.

Keep in mind that making AGI is a really big deal. If you're the one company that has an AGI, and if you have a significant lead over anyone else that does, the world is sort of your oyster.[4] If you have a massive lead, you could outwit legal systems, governments, and militaries.

I imagine that the 100x return cap means that the excess earnings would go into the hands of the nonprofit, which essentially means Sam Altman, senior leadership at OpenAI, and perhaps the board of directors (if legal authorities have any influence post-AGI). This would be a massive power gain for a small subset of people.

If DeepMind makes AGI, I assume the money would go to investors, which would mean it would be distributed to all of the Google shareholders. But if OpenAI makes AGI, the money will go to the leadership of OpenAI, on paper to fulfill the mission of OpenAI.

On the plus side, I expect that this subset is much more like the people reading this post than most other AGI competitors would be (the Chinese government, for example). I know some people at OpenAI, and my hunch is that the people there are very smart and pretty altruistic. It might well be about the best we could expect from a tech company.

And, to be clear, it's probably incredibly unlikely that OpenAI will actually create AGI, and even more unlikely they will do so with a decisive edge over competitors. But I'm sort of surprised so few other people seem at least a bit concerned and curious about the proposal.
My impression is that most press outlets haven't thought much at all about what AGI would actually mean, and most companies and governments just assume that OpenAI is dramatically overconfident.

----------------------------------------

(Aside on the details of Step 5)

I would love more information on Step 5, but I don't blame OpenAI for not providing it:

* Any precise description of how a nonprofit would spend "a large portion of the entire economy" would upset a bunch of powerful people.
* Arguably, OpenAI doesn't really need to figure out Step 5 unless a decisive AGI advantage starts to seem more plausible.
* I assume it's really hard to put together any reasonable plan for Step 5 at this point.

My guess is that we could really use some great nonprofit and academic work to help outline what a positive and globally acceptable Step 5 would look like (one that wouldn't upset any group too much if they were to understand it). There's been previous academic work on a "windfall clause"[5] (their 100x cap would basically count), so better work on Step 5 seems clearly worthwhile. (A rough sketch of the cap mechanics is included after the links below.)

[1] https://openai.com/blog/openai-lp/
[2] https://news.ycombinator.com/item?id=19360709
[3] https://openai.com/charter/
[4] This is termed a "decisive strategic advantage" in Nick Bostrom's book Superintelligence.
[5] https://www.effectivealtruism.org/articles/cullen-okeefe-the-windfall-clause-sharing-the-benefits-of-advanced-ai/

----------------------------------------

Also, see:

https://www.cnbc.com/2021/03/17/openais-altman-ai-will-make-wealth-to-pay-all-adults-13500-a-year.html

> Artificial intelligence will create so much wealth that every adult in the United States could be paid $13,500 per year from its windfall as soon as 10 years from now.

https://www.techtimes.com/articles/258148/20210318/openai-give-13-500-american-adult-anually-sam-altman-world.htm
https://moores.samaltman.com/
https://www.reddit.com/r/artificial/comments/m7cpyn/openais_sam_altman_artificial_intelligence_will/
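Since the 100x cap and the windfall clause come up above, here is a minimal sketch in Python, with made-up numbers (only the 100x multiple comes from OpenAI's announcement; the payout figures and the mechanism itself are illustrative assumptions), of how a capped-return structure splits a hypothetical payout between investors and the nonprofit:

```python
def split_returns(investment: float, total_payout: float, cap_multiple: float = 100.0):
    """Split a hypothetical payout between a capped investor and the nonprofit.

    Only the 100x cap multiple comes from OpenAI's announcement; everything
    else here is an illustrative assumption, not their actual mechanism.
    """
    investor_cap = investment * cap_multiple               # most the investor can ever receive
    to_investor = min(total_payout, investor_cap)          # paid out up to the cap
    to_nonprofit = max(total_payout - investor_cap, 0.0)   # everything above the cap
    return to_investor, to_nonprofit


# Example: a $10M investment against a (wildly hypothetical) $10T payout.
investor_share, nonprofit_share = split_returns(10e6, 10e12)
print(f"Investor receives  ${investor_share:,.0f}")   # $1,000,000,000 (hits the 100x cap)
print(f"Nonprofit receives ${nonprofit_share:,.0f}")  # $9,999,000,000,000
```

The only point of the sketch is that once returns get large relative to the cap, essentially all of the value ends up with whoever controls the nonprofit, which is the power-concentration concern raised above.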
Social Change Lab has two exciting opportunities for people passionate about social movements, animal advocacy and research to join our team!

Director (Maternity Cover)

We are looking for a strategic leader to join our team as interim Director. This role will be maternity cover for our current Director (me!) and will be a 12-month contract from July 2024. As Director, you would lead our small team in delivering cutting-edge research on the outcomes and strategies of the animal advocacy and climate movements and ensuring widespread communication of this work to key stakeholders.

Research and Communications Officer

We also have a potential opportunity for a Research and Communications Officer to join our team for 12 months. Please note this role is dependent on how hiring for our interim Director goes, as we will likely only hire one of these two roles.

Please see our Careers page for the full details of both roles and how to apply. If you have any questions about either role, please reach out to Mabli at mabli@socialchangelab.org
[GIF] A feature I'd love on the forum: while posts are read back to you, the part of the text that is being read is highlighted. This exists on Naturalreaders.com and I'd love to see it here (great for people who have wandering minds like me).
A periodic reminder that you can just email politicians and then meet them (see screenshot below).


Recent discussion

This research report summarizes a new meta-analysis: Preventing Sexual Violence —A Behavioral Problem Without a Behaviorally-Informed Solution, on which we are coauthors along with Roni Porat, Ana P. Gantman, and Elizabeth Levy Paluck.

The vast majority of papers try to ...


Why did you choose 1986 as a starting point? Attitudes about sexual violence seem to have changed a lot since then, so I wonder if the potential staleness of the older studies outweighs the value of having more studies for the analysis. [Finding no meaningful differences based on study age would render this question moot.]

2
Akhil
26m
Thanks for this post. Apologies I have not had time to read through in detail, but I would suggest that perhaps:

1. The search criteria that you have used have missed a significant number of papers within the field. Looking at the country distribution you posted, this becomes more obvious; I would suggest looking at the What Works papers that were produced several years ago, where quite extensive literature reviews were conducted.
2. I think you do acknowledge this weakly, but there is such wide-spanning heterogeneity in the studies that you have included (and the programs they use) that I think tighter sub-group analysis is needed to tease out meaningful conclusions.
3. A lot of work in this space has been done in the last 5-6 years; whilst not a specific limitation of your work, it is something to bear in mind!

The Astral Codex Ten (ACX) Grants impact market is live on Manifund — invest in 50+ proposals across projects in biotech, AI alignment, education, climate, economics, social activism, chicken law, etc. You can now invest in projects that you think will produce great results...


At least in the simple theoretical case. Maybe in practice small-value projects don't get funded. 

Scott Alexander has stated that: "Since most people won’t create literally zero value, and I don’t want to be overwhelmed with requests to buy certificates for tiny amounts, I’m going to set a limit that I won’t buy certificates that I value at less than half their starting price." I'm not sure exactly what "starting price" means here, but one could envision a rule like this causing a lot of grants which the retrofunder would assign some non-trivial value... (read more)
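For what it's worth, here is a minimal sketch of that worry in Python, with hypothetical numbers and with "starting price" assumed to mean the price at which a certificate was first sold (not necessarily Scott's intended meaning):

```python
def retrofunder_buys(final_valuation: float, starting_price: float) -> bool:
    """Assumed reading of the stated rule: only buy certificates the retrofunder
    values at >= half their starting price. This is an illustration, not
    Manifund's or Scott's actual mechanism."""
    return final_valuation >= 0.5 * starting_price


# A project the retrofunder values at $400 whose certificates first sold for $1,000:
print(retrofunder_buys(final_valuation=400, starting_price=1000))  # False
```

Under that reading, a project can produce real but modest value (anything under half its starting price) and still get nothing from this retrofunder.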

Like many organizations, Open Philanthropy has had multiple founding moments. Depending on how you count, we will be either seven, ten, or thirteen years old this year. Regardless of when you start the clock, it’s possible that we’ve changed more in the last two years than...


The decline of our available assets should disproportionately affect funding for GHW relative to GCR because we think that opportunities in our GHW portfolio vary less in terms of expected cost-effectiveness. That is, we think GHW opportunities are more closely clustered around the “bar” we use to define which grants meet our standards for cost-effectiveness.

I wonder whether the 2nd sentence above means you have cost-effectiveness estimates of your GHW grants. If so, I think it would be good if you shared them for transparency. I appreciate justifying well... (read more)
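To illustrate why the quoted clustering claim implies a disproportionate hit to GHW, here is a toy sketch in Python; all numbers are invented for illustration and are not Open Philanthropy's actual estimates:

```python
# Toy illustration (invented numbers): grants expressed as multiples of the
# current cost-effectiveness bar. A portfolio clustered tightly around the
# bar loses far more grants when a funding decline pushes the bar up.
ghw_grants = [1.0, 1.05, 1.1, 1.15, 1.2, 1.25]   # tightly clustered near the bar
gcr_grants = [0.9, 1.2, 2.0, 4.0, 8.0, 20.0]     # much wider spread

new_bar = 1.2  # assets decline, so the bar for funding rises by 20%

for name, grants in [("GHW", ghw_grants), ("GCR", gcr_grants)]:
    kept = sum(g >= new_bar for g in grants)
    print(f"{name}: {kept}/{len(grants)} grants still clear the higher bar")
# GHW: 2/6, GCR: 5/6 -- the tightly clustered portfolio loses most of its grants.
```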

2
Vasco Grilo
30m
Thanks for the comment, Nick! I tend to agree. At least based on my experience, people at OP are reasonably responsive. Here are my success rates privately contacting people at OP ("successful attempts[1]"/"attempts[2]"):

* All: 45.0 % (18/40).
* Aaron Gertler: 100 % (1/1).
* Ajeya Cotra: 0 (0/1).
* Andrew Snyder-Beattie: 100 % (1/1).
* Alexander Berger: 20 % (1/5).
* Ben Stewart: 100 % (1/1).
* Cash Callaghan: 0 (0/1).
* Claire Zabel: 0 (0/3).
* Damon Binder: 0 (0/3).
* Derek Hopf: 100 % (2/2).
* Harshdeep Singh: 0 (0/1).
* Heather Youngs: 0 (0/2).
* Holden Karnofsky: 100 % (3/3).
* Jacob Trefethen: 0 (0/1).
* James Snowden: 0 (0/1).
* Jason Schukraft: 100 % (2/2).
* Lewis Bollard: 75 % (3/4).
* Luca Righetti: 50 % (1/2).
* Luke Muehlhauser: 0 (0/1).
* Matt Clancy: 0 (0/1).
* Philip Zealley: 100 % (1/1).
* Rossa O’Keeffe-O’Donovan: 0 (0/1).
* Will Sorflaten: 100 % (2/2).

1. ^ At least 1 reply.
2. ^ Counting multiple attempts on the same topic as a single attempt.
4
Vasco Grilo
2h
Thanks for sharing, Linda! I very much agree Open Phil breaking a promise to provide funding would be bad. However, I assume Open Phil asked about alternative sources of funding in the application, and I wonder whether the promise to provide funding was conditional on the other sources not being successful.

Tl;dr: One of the biggest problems facing any kind of collective action today is the fracturing of the information landscape. I propose a collective, issue-agnostic observatory with a mix of algorithmic and human moderation for the purposes of aggregating information, separate...

5
Henry Howard
3h
I think overall this post plays into a few common negative stereotypes of EA: enthusiastic, well-meaning people (sometimes with a grandiose LoTR-reference username) proposing grand plans to solve an enormously complex problem without really acknowledging or understanding the nuance. Suggesting that we simply develop an algorithm to identify "high-quality content", and that a combination of crowds and experts will reliably be able to distinguish factual from non-factual information, seems to completely miss the point of the problem, which is that both of these things are extremely difficult and that's why we have a disinformation crisis.

It is true that this is not likely to solve the disinformation crisis. It is also true that the successful implementation of such a platform would be quite difficult. However, there are reasons why I outlined the platform as I did:

  • Small online newsrooms like 404 Media have recently come into existence with subscriber-based models that allow them to produce high-quality content while catering to specialised audiences. If sufficient resources are there to attract high-quality reporters (who, I note in the post, perform a function that cannot be easily rep
... (read more)
1
Light_of_Illuvatar
4h
Hi, the general model for the platform would be something akin to a web-based news site (e.g. WIRED, Vox, etc.) and a subreddit combined. There's the human-run, in-depth coverage part, where the work should be done to increase impartiality, but there's also the linklist part, which allows community members to "float" content they find interesting without getting bogged down in writing it up, so to speak. The links shared will definitely be opinionated, but that should be mitigated by the human coverage, and the limitations of human coverage (speed of updates, long reading time) can hopefully be compensated for by the linklist/subreddit portion of the site.

Cross-posted as there may be others interested in educating others about early-stage research fields on this forum. I am considering taking on additional course design projects over the next few months. Learn more about hiring me to consult on course design.

Introduction

...

What I had in mind was "shows up to all 8 discussion groups for the taught part of the course". I also didn't check this figure, so that was from memory.

True, there are lots of ways to define it (e.g. finishing the readings, completing the project, etc)

2
Linda Linsefors
17h
I do think AISF is a real improvement to the field. My apologies for not making this clear enough.

You mean MIRI's syllabus? I don't remember what 80k's one looked like back in the day, but the one that is up now is not just "go read a bunch of textbooks". I personally used CHAI's one and found it very useful. Also, sometimes you should go read a bunch of textbooks. Textbooks are great.

I’m Emma from the Communications team at the Centre for Effective Altruism (CEA). I want to flag a few media items related to EA that have come out recently or will be coming out soon, given they’ll touch on topics—like FTX—that I expect will be of interest to Forum readers...


Considering how much mud was being slung around the FTX collapse, "clearing CEA's name" and proving that no one there knew about the fraud seems not just like PR to me, but pretty important for getting the org back to a place where it’s able to meaningfully do its work.

Plus, that investigation is not the only thing mentioned in the reflection reform paragraph. The very next sentence also says CEA has "reinvested in donor due diligence, updated our conflict-of-interest policies and reformed the governance of our organization, replacing leadership on the board and the staff."

29
Owen Cotton-Barratt
5h
I think you're missing some important ground in between "reflection process" and "PR exercise". I can't speak for EV or other people then on the boards, but from my perspective the purpose of the legal investigation was primarily about helping to facilitate justified trust. Sam had been seen by many as a trusted EA leader, and had previously been on the board of CEA US. It wouldn't have been unreasonable if people in EA (or even within EV) started worrying that leadership were covering things up. Having an external investigation was, although not a cheap signal to send, much cheaper in worlds where there was nothing to hide than in worlds where we wanted to hide something; and internal trust is extremely important. Between that and wanting to be able to credibly signal to external stakeholders like the Charity Commission, I think general PR was a long way down the list of considerations.
1
trevor1
11h
Ah, my bad, I did a ctrl + f for "sam"! Glad that it was nothing.

I believe that doing EA community building, especially at top universities, can be a great early career move for certain people. It’s possible that not enough students or recent graduates are aware of the reasons why this could be a good option for them, so I wanted to lay out my thoughts in this post. My central claim is that running an EA or cause area group at a top university can provide very useful career capital for individuals in the early stages of their careers.

The specific work I’m referring to is currently funded through Open Philanthropy’s University Group Organiser Fellowship. This usually involves running an Effective Altruism or cause area (e.g. AI Safety) group at a university. Open Philanthropy provides funding for organisers working at least 10 hours per week, though in this post I’m mostly thinking of people doing this work full-time (or something close to that)[1]. My...


Reflecting on the upcoming EAGx event in Utrecht, I find myself both excited and cautiously optimistic about its potential to further grow the Dutch EA community. My experience at the last EAGx in the Netherlands marked a pivotal moment in my own EA journey (significantly...


Hi James, I feel quite guilty for prompting you to write such a long, detailed, and persuasive response! Striving to find a balance between prudence and appeal seems to be the ideal goal. Using the NHS's spending habits as a heuristic to avoid extravagance seems smart (although I would not say that this should apply to other events!). Most importantly, I am relieved to learn that this year's budget per person will likely be significantly lower.

I totally agree that these events are invaluable. EAGs and EAGxs have been crucial in expanding my network and enhancing my impact and agency. However, as mentioned, I was concerned about perceptions. Having heard this, I feel reassured, and I will see who I can invite! Thank you!

That's nice to read! But please don't feel guilty, I found it to be a very useful prompt to write up my thoughts on the matter. 

Summary

As the Soviet Union collapsed in 1991, the fate of its weapons of mass destruction (WMD) programs presented a new type of catastrophic risk: what would happen to all the nuclear, biological, and chemical weapons and materials, and the scientists who worked on them...


Executive summary: The Cooperative Threat Reduction Program, which aimed to secure and dismantle weapons of mass destruction in former Soviet states after 1991, succeeded due to the interpersonal skills, strategic leadership, and personal qualities of key individuals involved in its origins and implementation.

Key points:

  1. Preparatory academic and policy work in the 1980s by figures like David Hamburg and Jane Wales helped lay the groundwork for CTR's initial success.
  2. Interpersonal skills such as building trust, bringing people together across disciplines, men
... (read more)