Quick takes

Next month, two EAGx events are happening in new locations: Austin and Copenhagen! Applications for these events are closing soon:

* Apply to EAGxAustin by this Sunday, March 31
* Apply to EAGxNordics by April 7

These conferences are primarily for people who are at least familiar with the core ideas of effective altruism and are interested in learning more about what to do with these ideas. We're particularly excited to welcome people working professionally in the EA space to connect with others nearby and provide mentorship to those newer to the field. If you want to attend but are unsure whether to apply, please err on the side of applying! If you've applied to an EA Global or EAGx event before, you can use the same application for either event.
Social Change Lab has two exciting opportunities for people passionate about social movements, animal advocacy, and research to join our team!

Director (Maternity Cover)

We are looking for a strategic leader to join our team as interim Director. This role is maternity cover for our current Director (me!) and will be a 12-month contract starting July 2024. As Director, you would lead our small team in delivering cutting-edge research on the outcomes and strategies of the animal advocacy and climate movements, and in ensuring widespread communication of this work to key stakeholders.

Research and Communications Officer

We also have a potential opportunity for a Research and Communications Officer to join our team for 12 months. Please note this role depends on how hiring for our interim Director goes, as we will likely only hire for one of these two roles.

Please see our Careers page for the full details of both roles and how to apply. If you have any questions about either role, please reach out to Mabli at mabli@socialchangelab.org.
(This is a draft I wrote in December 2021. I didn't finish and publish it then, in part because I was nervous it could be too spicy. At this point, with the discussion post-ChatGPT, it seems far more boring, and someone recommended I post it somewhere.)

Thoughts on the OpenAI Strategy

OpenAI has one of the most audacious plans out there, and I'm surprised at how little attention it's gotten.

First, they say flat out that they're going for AGI. Then, when they raised money in 2019, they included a clause capping investors' returns at 100x:

> "Economic returns for investors and employees are capped... Any excess returns go to OpenAI Nonprofit... Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress."[1]

On Hacker News, one of their employees says:

> "We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company." [2]

You can read more about this mission in the charter:

> "We commit to use any influence we obtain over AGI's deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
>
> Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."[3]

This is my [incredibly rough and speculative, based on the above posts] impression of the plan they are proposing:

1. Make AGI
2. Turn AGI into huge profits
3. Give 100x returns to investors
4. Dominate much (most?) of the economy, with all remaining profits going to the OpenAI Nonprofit
5. Use AGI for "the benefit of all"?

I'm really curious what step 5 is supposed to look like exactly. I'm also very curious, of course, what they expect step 4 to look like.

Keep in mind that making AGI is a really big deal. If you're the one company that has an AGI, and if you have a significant lead over anyone else that does, the world is sort of your oyster.[4] If you have a massive lead, you could outwit legal systems, governments, and militaries.

I imagine that the 100x return cap means that the excess earnings would go into the hands of the nonprofit, which essentially means Sam Altman, senior leadership at OpenAI, and perhaps the board of directors (if legal authorities have any influence post-AGI). This would be a massive power gain for a small subset of people.

If DeepMind makes AGI, I assume the money would go to investors, which would mean it would be distributed to all of the Google shareholders. But if OpenAI makes AGI, the money will go to the leadership of OpenAI, on paper to fulfill the mission of OpenAI.

On the plus side, I expect that this subset is much more like the people reading this post than most other AGI competitors would be (the Chinese government, for example). I know some people at OpenAI, and my hunch is that the people there are very smart and pretty altruistic. It might well be about the best we could expect from a tech company.

And, to be clear, it's probably incredibly unlikely that OpenAI will actually create AGI, and even more unlikely that they will do so with a decisive edge over competitors. But I'm sort of surprised so few other people seem at least a bit concerned and curious about the proposal?
My impression is that most press outlets haven't thought much at all about what AGI would actually mean, and most companies and governments just assume that OpenAI is dramatically overconfident in itself.

---

(Aside on the details of Step 5)

I would love more information on Step 5, but I don't blame OpenAI for not providing it.

* Any precise description of how a nonprofit would spend "a large portion of the entire economy" would upset a bunch of powerful people.
* Arguably, OpenAI doesn't really need to figure out Step 5 unless their odds of actually having a decisive AGI advantage start to seem more plausible.
* I assume it's really hard to put together any reasonable plan for Step 5 now.

My guess is that we really could use some great nonprofit and academic work to help outline what a positive and globally acceptable Step 5 would look like (one that wouldn't upset any group too much if they were to understand it). There's been previous academic work on a "windfall clause"[5] (their 100x cap would basically count as one), so having better work on Step 5 seems like an obvious next step.

[1] https://openai.com/blog/openai-lp/
[2] https://news.ycombinator.com/item?id=19360709
[3] https://openai.com/charter/
[4] This was called a "decisive strategic advantage" in Nick Bostrom's book Superintelligence.
[5] https://www.effectivealtruism.org/articles/cullen-okeefe-the-windfall-clause-sharing-the-benefits-of-advanced-ai/

---

Also, see: https://www.cnbc.com/2021/03/17/openais-altman-ai-will-make-wealth-to-pay-all-adults-13500-a-year.html

> Artificial intelligence will create so much wealth that every adult in the United States could be paid $13,500 per year from its windfall as soon as 10 years from now.

https://www.techtimes.com/articles/258148/20210318/openai-give-13-500-american-adult-anually-sam-altman-world.htm
https://moores.samaltman.com/
https://www.reddit.com/r/artificial/comments/m7cpyn/openais_sam_altman_artificial_intelligence_will/
[GIF] A feature I'd love on the Forum: while posts are read back to you, the part of the text being read is highlighted. This exists on Naturalreaders.com, and I would love to see it here (great for people with wandering minds like me).
A periodic reminder that you can just email politicians and then meet them (see screenshot below).


Recent discussion

This post was cross-posted from the Substack Thing of Things with the permission of the author.


In defense of trying things out

The Economist recently published an article, “How poor Kenyans became economists’ guinea pigs,” which critiques development economists’ use of randomized...


Good post, more detailed thoughts later, but one nitpick:

As far as I can tell, calling the deworming project "one of the most successful RCTs to date" is just wrong. There is widespread disagreement about what we can conclude about deworming from the available evidence, with many respected academics saying that deworming has no effect on education at all, while other RCTs show much larger effects.

I don't think "most successful RCT" is supposed to mean "most effective intervention" but rather "most influential RCT"; deworming has been picked up by a bunch of NGOs and governments after Miguel and Kremer, plausibly because of that study.

(Conflict note: I know and like Ted Miguel)


Welcome! Use this thread to introduce yourself or ask questions about anything that confuses you. (For inspiration, you can see the last open thread here.)

Get started on the EA Forum

The "Guide to norms on the Forum" shares more about that discussions we'd like to see on...


Hello everyone, my name is Manik. I joined the forum a month ago and am very excited to be here.

I am a business and technology consultant in the UK, collaborating with clients on business problems across multiple industries and domains. I have 3 years of software development experience following my Bachelor's in Engineering and 6 years of business and technology consulting experience following my MBA. I am exploring ways to use my skill set to transition into a career with a bigger impact.

Over the last few years, I developed ideas about how I want to...


I'm posting this to tie in with the Forum's Draft Amnesty Week (March 11-17) plans, but it is also a question of more general interest. The last time this question was posted, it got some great responses. 

This post is a companion post for What posts are you thinking...


I was originally going to write an essay based on this prompt but I don't think I actually understand the Epicurean view well enough to do it justice. So instead, here's a quick list of what seem to me to be the implications. I don't exactly agree with the Epicurean view but I do tend to believe that death in itself isn't bad, it's only bad in that it prevents you from having future good experiences.

  1. Metrics like "$3000 per life saved" don't really make sense.
    • I avoid referencing dollars-per-life-saved when I'm being rigorous. I might use them when speak...

I recently got an op-ed published in The Crimson advocating, sort of, for an Earning to Give strategy.

The Crimson is widely read among Harvard students, and its content runs through many circles — not just those who care about student journalism.

I thought the piece was ...


Great post!
Do note that, given the context and background, a lot of your peers are probably going to be nudged towards charitable ideas anyway. I would encourage you to be mindful of whether you are doing things that have counterfactual impact, while also taking into account the value of your own time and your potential to do good.

I also encourage you to be cognizant of not epistemically taking over other people's world models with something like "AI is going to kill us all" - I think an uncomfortable amount of the space inadvertently and unknowingly does this, and it's one of the key reasons why I never started an EA group at my university.

TL;DR: Global performance indicators (GPIs) compare countries' policy performance, encouraging competition and pressuring policymakers for reforms. While effective, creating GPIs carries risks such as public backlash. However, certain characteristics can mitigate these ...


I'm a big fan of these intervention reports. They're not directly relevant to anything I'm working on right now so I'm only skimming them but they seem high quality to me. I especially appreciate how you both draw on relevant social science external to the movement, and more anecdotal evidence and reasoning specific to animal advocacy.

When you summarise the studies, I'd find it more helpful if you summarised the key evidence rather than their all-things-considered views.

E.g. in the cost-effectiveness section you mention that costs are low, seeming to assum...

SummaryBot
Executive summary: Global performance indicators (GPIs) that rank jurisdictions on animal welfare policies could be an effective and low-cost tool to drive policy changes, if designed well to maximize impact and minimize risks.

Key points:

1. GPIs can pressure policymakers to enact reforms by stimulating competition between jurisdictions and attracting media attention.
2. Evidence suggests GPIs can influence policy in desired directions, at least in some contexts, though precise impact is hard to measure.
3. Key risks include public/political backlash and policymakers gaming the system, but these can be mitigated through careful GPI design.
4. Effective GPIs should focus on actionable policies, be updated regularly, come from respected independent sources, and frame criticism respectfully.
5. A small "GPI squad" producing targeted animal welfare GPIs could potentially be highly cost-effective for driving policy changes.
6. After a trial period, a more rigorous cost-effectiveness analysis could determine if the GPI approach is worth continuing at scale.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

I’m Emma from the Communications team at the Centre for Effective Altruism (CEA). I want to flag a few media items related to EA that have come out recently or will be coming out soon, given they’ll touch on topics—like FTX—that I expect will be of interest to Forum readers. 

  • The CEO of CEA, @Zachary Robinson, wrote an op-ed that came out today addressing Sam Bankman-Fried and the continuing value of EA. (Read here)
  • @William_MacAskill will appear on two podcasts and will discuss FTX: Clearer Thinking with Spencer Greenberg and the Making Sense Podcast with Sam Harris.
    • The podcast episode with Sam Harris will likely be released next week and is aimed at a general audience.
    • The podcast episode with Spencer Greenberg will likely be released in two weeks and is aimed at people more familiar with the EA movement.

I’ll add links for these episodes once they become available and plan to update ...


What is this?

After a running career[1] across marathons, 50K, 50-mile, 100K, and 100-mile distance events over the past eleven years, I'm tackling the 200-mile distance at the Tahoe 200 from June 14-18 this year. 

It's a bit of a ridiculous, silly challenge. It's...


Thanks, Philippe! Good luck at Boston!! I wanted to do it this year, but it didn't work out with my schedule. 

This post summarizes "Against the Singularity Hypothesis," a Global Priorities Institute Working Paper by David Thorstad. This post is part of my sequence of GPI Working Paper summaries. For more, Thorstad’s blog, Reflective Altruism, has a three...

ElliotJDavies
  * I'm not sure why "prolonged period" or "sustained" was used here.
  * I'm also not sure what is meant by "prolonged period": 5 years? 100 years?
  * Given the answer to the above, why do you believe this would be required?

Just to help nail down the crux here: I don't see why more than a few days of an intelligence explosion would be required for a singularity event.

I feel this claim is disconnected from the definition of the singularity given in the paper:

> The singularity hypothesis begins with the supposition that artificial agents will gain the ability to improve their own intelligence. From there, it is claimed that the intelligence of artificial agents will grow at a rapidly accelerating rate, producing an intelligence explosion in which artificial agents quickly become orders of magnitude more intelligent than their human creators. The result will be a singularity, understood as a fundamental discontinuity...
ElliotJDavies
I'm not sure I understand this claim, and I can't see that it's supported by the cited paper.  Is the claim that energy costs have increased faster than computation? This would be cruxy, but it would also be incorrect.