Quick takes

Next month, two EAGx events are happening in new locations: Austin and Copenhagen! Applications for these events are closing soon:

  • Apply to EAGxAustin by this Sunday, March 31
  • Apply to EAGxNordics by April 7

These conferences are primarily for people who are at least familiar with the core ideas of effective altruism and are interested in learning more about what to do with these ideas. We're particularly excited to welcome people working professionally in the EA space to connect with others nearby and provide mentorship to those new to the space. If you want to attend but are unsure about whether to apply, please err on the side of applying! If you've applied to attend an EA Global or EAGx event before, you can use the same application for either event.
(This is a draft I wrote in December 2021. I didn't finish and publish it then, in part because I was nervous it could be too spicy. At this point, with the discussion post-ChatGPT, it seems far more boring, and someone recommended I post it somewhere.)

Thoughts on the OpenAI Strategy

OpenAI has one of the most audacious plans out there, and I'm surprised at how little attention it's gotten.

First, they say flat out that they're going for AGI. Then, when they raised money in 2019, they had a clause that says investors will be capped at getting 100x of their returns back.

> "Economic returns for investors and employees are capped... Any excess returns go to OpenAI Nonprofit... Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress."[1]

On Hacker News, one of their employees says:

> "We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company."[2]

You can read more about this mission in the charter:

> "We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
>
> Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."[3]

This is my [incredibly rough and speculative, based on the above posts] impression of the plan they are proposing:

  1. Make AGI
  2. Turn AGI into huge profits
  3. Give 100x returns to investors
  4. Dominate much (most?) of the economy, and have all further profits go to the OpenAI Nonprofit
  5. Use AGI for "the benefit of all"?

I'm really curious what step 5 is supposed to look like exactly. I’m also very curious, of course, what they expect step 4 to look like.

Keep in mind that making AGI is a really big deal. If you're the one company that has an AGI, and if you have a significant lead over anyone else that does, the world is sort of your oyster.[4] If you have a massive lead, you could outwit legal systems, governments, and militaries.

I imagine that the 100x return cap means that the excess earnings would go into the hands of the nonprofit, which essentially means Sam Altman, senior leadership at OpenAI, and perhaps the board of directors (if legal authorities have any influence post-AGI). This would be a massive power gain for a small subset of people.

If DeepMind makes AGI, I assume the money would go to investors, which would mean it would be distributed to all of the Google shareholders. But if OpenAI makes AGI, the money will go to the leadership of OpenAI, on paper to fulfill the mission of OpenAI.

On the plus side, I expect that this subset is much more like the people reading this post than most other AGI competitors would be (the Chinese government, for example). I know some people at OpenAI, and my hunch is that the people there are very smart and pretty altruistic. It might well be about the best we could expect from a tech company.

And, to be clear, it’s probably incredibly unlikely that OpenAI will actually create AGI, and even more unlikely they will do so with a decisive edge over competitors. But I'm sort of surprised so few other people seem at least a bit concerned and curious about the proposal?
My impression is that most press outlets haven't thought much at all about what AGI would actually mean, and most companies and governments just assume that OpenAI is dramatically overconfident.

----------------------------------------

(Aside on the details of Step 5)

I would love more information on Step 5, but I don’t blame OpenAI for not providing it.

  • Any precise description of how a nonprofit would spend “a large portion of the entire economy” would upset a bunch of powerful people.
  • Arguably, OpenAI doesn’t really need to figure out Step 5 unless their odds of actually having a decisive AGI advantage seem more plausible.
  • I assume it’s really hard to actually put together any reasonable plan now for Step 5.

My guess is that we really could use some great nonprofit and academic work to help outline what a positive and globally acceptable (i.e. wouldn’t upset any group too much if they were to understand it) Step 5 would look like. There’s been previous academic work on a “windfall clause”[5] (their 100x cap would basically count), so better work on Step 5 seems like an obvious next step.

[1] https://openai.com/blog/openai-lp/
[2] https://news.ycombinator.com/item?id=19360709
[3] https://openai.com/charter/
[4] This was called a “decisive strategic advantage” in the book Superintelligence by Nick Bostrom.
[5] https://www.effectivealtruism.org/articles/cullen-okeefe-the-windfall-clause-sharing-the-benefits-of-advanced-ai/

----------------------------------------

Also, see: https://www.cnbc.com/2021/03/17/openais-altman-ai-will-make-wealth-to-pay-all-adults-13500-a-year.html

> Artificial intelligence will create so much wealth that every adult in the United States could be paid $13,500 per year from its windfall as soon as 10 years from now.

https://www.techtimes.com/articles/258148/20210318/openai-give-13-500-american-adult-anually-sam-altman-world.htm
https://moores.samaltman.com/
https://www.reddit.com/r/artificial/comments/m7cpyn/openais_sam_altman_artificial_intelligence_will/
Social Change Lab has two exciting opportunities for people passionate about social movements, animal advocacy and research to join our team!

Director (Maternity Cover)

We are looking for a strategic leader to join our team as interim Director. This role will be maternity cover for our current Director (me!) and will be a 12-month contract from July 2024. As Director, you would lead our small team in delivering cutting-edge research on the outcomes and strategies of the animal advocacy and climate movements and ensuring widespread communication of this work to key stakeholders.

Research and Communications Officer

We also have a potential opportunity for a Research and Communications Officer to join our team for 12 months. Please note this role is dependent on how hiring for our interim Director goes, as we will likely only hire one of these two roles.

Please see our Careers page for the full details of both roles and how to apply. If you have any questions about either role, please reach out to Mabli at mabli@socialchangelab.org
[GIF] A feature I'd love on the forum: while posts are read back to you, the part of the text that is being read is highlighted. This exists on Naturalreaders.com and I'd love to see it here (great for people who have wandering minds like me).
A periodic reminder that you can just email politicians and then meet them (see screenshot below).

Recent discussion

I believe that doing EA community building, especially at top universities, can be a great early career move for certain people. It’s possible that not enough students or recent graduates are aware of the reasons why this could be a good option for them, so I wanted to lay out my thoughts in this post. My central claim is that running an EA or cause area group at a top university can provide very useful career capital for individuals in the early stages of their careers.

The specific work I’m referring to is currently funded through Open Philanthropy’s University Group Organiser Fellowship. This usually involves running an Effective Altruism or cause area (e.g. AI Safety) group at a university. Open Philanthropy provides funding for organisers working at least 10 hours per week, though in this post I’m mostly thinking of people doing this work full-time (or something close to that)[1]. My...


Reflecting on the upcoming EAGx event in Utrecht, I find myself both excited and cautiously optimistic about its potential to further grow the Dutch EA community. My experiences from the last EAGx in the Netherlands marked a pivotal moment in my own EA journey (significantly...


Thanks for sharing your thoughts! 

It's a pity you don't feel comfortable inviting people to the conference - that's the last thing we want to hear! 

So far our visual style for EAGxUtrecht hasn't been austere[1] so we'll think more about this. Normally, to avoid looking too fancy, I ask myself: would this be something the NHS would spend money on?

But I'm not sure how to balance the appearance of prudence with making things look attractive. Things that make me lean towards making things look attractive include:

  • This essay on the value of aesthetics to movements 
  • This SSC review, specifically the third reason Pease mentions for the Fabians' success
  • The early success of SMA and their choice to spend a lot on marketing and design
  • Things I've heard from friends who could really help EA, saying things like, "ugh, all this EA stuff looks the same/like it was made by a bunch of guys"

For what it's worth, the total budget this year is about half of what was spent in 2022, and we have the capacity for almost the same number of attendees (700 instead of 750). 

In case it's useful, here are some links that show the benefits of EAGx events. I admit they don't provide a slam-dunk case for cost-effectiveness, but they might be useful when talking to people about why we organise them: 

  • EAGx events seem to be a particularly cost-effective way of building the EA community, and we think the EA community has enormous potential to help build a better world. 
  • Open Philanthropy’s 2020 survey of people involved in longtermist priority work (a significant fraction of work in the EA community) found that about half of the impact that CEA had on respondents was via EAG and EAGx conferences.
  • Anecdotally, we regularly encounter community members who cite EAGx events as playing a key part in their EA journey. You can read some examples from CEA’s analysis

Thanks again for sharing your thoughts! I hope your pseudonymous account is helping you use the forum, although I definitely don't think you need to worry about looking dumb :)

  1. ^ We're going for pink and fun instead. We're only going to spend a few hundred euros on graphic design.

Hi James, I feel quite guilty for prompting you to write such a long, detailed, and persuasive response! Striving to find a balance between prudence and appeal seems to be the ideal goal. Using the NHS's spending habits as a heuristic to avoid extravagance seems smart (although I would not say that this should apply to other events!). Most importantly, I am relieved to learn that this year's budget per person will likely be significantly lower.

I totally agree that these events are invaluable. EAGs and EAGxs have been crucial in expanding my network and enhancing my impact and agency. However, as mentioned, I am concerned about perceptions. Having heard this, I feel reassured, and I will see who I can invite! Thank you!

That's nice to read! But please don't feel guilty, I found it to be a very useful prompt to write up my thoughts on the matter. 

Summary

As the Soviet Union collapsed in 1991, the fate of its weapons of mass destruction (WMD) programs presented a new type of catastrophic risk: what would happen to all the nuclear, biological, and chemical weapons and materials, and the scientists who worked on them...


Executive summary: The Cooperative Threat Reduction Program, which aimed to secure and dismantle weapons of mass destruction in former Soviet states after 1991, succeeded due to the interpersonal skills, strategic leadership, and personal qualities of key individuals involved in its origins and implementation.

Key points:

  1. Preparatory academic and policy work in the 1980s by figures like David Hamburg and Jane Wales helped lay the groundwork for CTR's initial success.
  2. Interpersonal skills such as building trust, bringing people together across disciplines, men
...

Identity

In theory of mind, the question of how to define an "individual" is complicated. If you're not familiar with this area of philosophy, see Wait But Why's introduction.

I think most people in EA circles subscribe to the computational theory of mind, which means that...


Executive summary: Diversity-oriented theories of moral value, which place intrinsic value on the diversity of experiences, have significant implications for the effectiveness of interventions aimed at improving shrimp welfare in factory farming.

Key points:

  1. Computational theories of mind and identity suggest that the moral value of an individual depends on the uniqueness of their mental experiences.
  2. Shrimp likely have a limited number of meaningfully distinct mental experiences due to their small brain size.
  3. Interventions that improve the quality of life for
...
MichaelStJules:
If you don't care about where or when duplicate experiences exist, only their number, then not caring about duplicates at all gives you a fanatical wager against the universe having infinitely many moral patients, e.g. by being infinitely large spatially, going on forever in time, having infinitely many pocket universes. It would also give you a wager against the many-worlds interpretation of quantum mechanics, because there will be copies of you having identical experiences in (at least slightly) already physically distinct branches.
MichaelStJules:
Also, I'd guess most people who value diversity of experience mean that only for positive experiences. I doubt most would mean repeated bad experiences aren't as bad as diverse bad experiences, all else equal.

Cross-posted on LessWrong.

This article is the fourth in a series of ~10 posts comprising a 2024 State of the AI Regulatory Landscape Review, conducted by the Governance Recommendations Research Program at Convergence Analysis. Each post will cover a specific...


Executive summary: Current and proposed regulations require AI-generated content to be labeled and watermarked, but these lightweight methods have limitations in preventing misuse and ensuring accountability.

Key points:

  1. Labeling and watermarking AI-generated content informs users and enables tracing the source AI model.
  2. The US, China, and EU have proposed or enacted rules requiring conspicuous labeling and robust watermarking of AI content.
  3. Labeling and watermarking are lightweight methods with precedent, but compliance and effectiveness can vary.
  4. Labels and w
...

Tl;dr: One of the biggest problems facing any kind of collective action today is the fracturing of the information landscape. I propose a collective, issue-agnostic observatory with a mix of algorithmic and human moderation for the purposes of aggregating information, separate...


I think overall this post plays into a few common negative stereotypes of EA: Enthusiastic well-meaning people (sometimes with a grandiose LoTR reference username) proposing grand plans to solve an enormously complex problem without really acknowledging or understanding the nuance.

Suggesting that we simply develop an algorithm to identify "high quality content" and that a combination of crowds and experts will reliably be able to identify factual vs non-factual information seems to completely miss the point of the problem, which is that both of these things are extremely difficult and that's why we have a disinformation crisis.

Owen Cotton-Barratt:
I think it's great to think about what projects should maybe exist and then pitch them! Kudos to you for doing that; it seems potentially one of the highest-value activities on the Forum. I think that information flows are really important, and in principle projects like this could be really high-value already in the world today. Moreover, I agree that the general area is likely to increase in importance as the impacts of language models are more widely felt. But details are going to matter a lot, and I'm left scratching my head a bit over this:

  • When I read the specific pitch here, I don't think I have a clear enough picture of what kind of topics this is going to cover, and what audiences it will serve
  • Is it best thought of like "Wikipedia, but for news"? Something more EA-focused than that?
  • You talk about the importance of having things that are just news, not advocacy
  • But it also sounds like most of what you're imagining is links to other sources of information
  • Most news sources at the moment come with some degree of opinionated views slanting how they're presented; presumably you're not going to exclude anything being linked just because of that?
  • If this impartiality is really important, would it maybe be better to just collect the bare facts, rather than link to external articles?
  • This could be more efficient in information-per-word, as well as reducing spin
Light_of_Illuvatar:
Hi, the general model for the platform would be something akin to a web-based news site (e.g. WIRED, Vox, etc.) and a subreddit combined. There's the human-run, in-depth coverage part, where the work should be done to increase impartiality, but there's also the linklist part, which allows community members to "float" content they find interesting without getting bogged down in writing it up, so to speak. The links shared will definitely be opinionated, but that should be mitigated by the human coverage, and the limitations of human coverage (speed of updates, long reading time) can hopefully be compensated for by the linklist/subreddit portion of the site.

If you've read Leif's WIRED article or Poverty is No Pond & have questions for him, I'd love to share them with him & in turn share his answers here.

Thank you, M, for sharing this with me & encouraging me to connect.

Answer by titotal (Mar 29, 2024):

Your article concludes with an anecdote about your surfer friend Aaron, who befriended a village and helped upgrade their water supply. Is this meant to be an alternative model of philanthropy? Would you really encourage people to do this on a large scale? How would you avoid this turning into voluntourism, where poor people in the third world have to pretend to befriend wannabe white saviours in exchange for money?

huw:
I thought he spelled out his ETG criticism quite clearly in the article, so I’ll paraphrase what I took from it. I think he would argue that, for the same person in the same job, donating X% of their money is a better thing. However, the ETG ethos that has hung around in the community promotes seeking out extremely high-paying jobs in order to donate even more money. These jobs often bring about more harms in turn (both in an absolute sense and possibly also to the point that ETG is net-negative, for example in the case of SBF), especially if we live in an economic system that rewards behaviour that profits off negative externalities.

I’m Emma from the Communications team at the Centre for Effective Altruism (CEA). I want to flag a few media items related to EA that have come out recently or will be coming out soon, given they’ll touch on topics—like FTX—that I expect will be of interest to Forum readers...


I think you're missing some important ground in between "reflection process" and "PR exercise".

I can't speak for EV or other people then on the boards, but from my perspective the purpose of the legal investigation was primarily about helping to facilitate justified trust. Sam had been seen by many as a trusted EA leader, and had previously been on the board of CEA US. It seemed it wouldn't be unreasonable if people in EA (or even within EV) started worrying that leadership were covering things up. Having an external investigation was, although not a cheap...

trevor1:
If EA currently

  1. is in the middle of a Dark Forest (e.g. news outlets systematically following emergent consumer interest in criticizing EA and everything it stands for),
  2. perceives itself as currently being in the middle of a dark forest, or at risk of already being in one (which might be hard to evaluate, e.g. due to the dynamics described in Social Dark Matter), or
  3. expects to enter a dark forest at some point in the near future (or the world around it to turn into a dark forest, e.g. if China invades Taiwan and a wide variety of norms go out the window),

then I imagine that it would be pretty difficult to design institutional constraints that are resilient to observation and exploitation by a wide variety of possible adversaries, while balancing those same institutional constraints to simultaneously be visible and credible/satisfying to a wide variety of observers?
trevor1:
Ah, my bad, I did a ctrl + f for "sam"! Glad that it was nothing.

This post was partly inspired by, and shares some themes with, this Joe Carlsmith post. My post (unsurprisingly) expresses fewer concepts with less clarity and resonance, but is hopefully of some value regardless.

Content warning: description of animal death.

I live in a ...


My prior would be that unless you check extremely frequently, this sounds like a lot of suffering. But I'm not sure about the other options.

Tiresias:
This post was moving, thank you for writing it. I have dealt with a similar situation, and found it impossible. I've dealt with that impossibility by trying to justify what I've done, and absolve myself. Your post is forthright: you killed the moths. We can move on from it, but we don't need to rationalize it.