New & upvoted


Posts tagged community

Quick takes

Next month, two EAGx events are happening in new locations: Austin and Copenhagen! Applications for these events are closing soon:

* Apply to EAGxAustin by this Sunday, March 31
* Apply to EAGxNordics by April 7

These conferences are primarily for people who are at least familiar with the core ideas of effective altruism and are interested in learning more about what to do with these ideas. We're particularly excited to welcome people working professionally in the EA space to connect with others nearby and provide mentorship to those new to the space. If you want to attend but are unsure about whether to apply, please err on the side of applying! If you've applied to attend an EA Global or EAGx event before, you can use the same application for either event.
Social Change Lab has two exciting opportunities for people passionate about social movements, animal advocacy and research to join our team!

Director (Maternity Cover)

We are looking for a strategic leader to join our team as interim Director. This role will be maternity cover for our current Director (me!) and will be a 12-month contract from July 2024. As Director, you would lead our small team in delivering cutting-edge research on the outcomes and strategies of the animal advocacy and climate movements and ensuring widespread communication of this work to key stakeholders.

Research and Communications Officer

We also have a potential opportunity for a Research and Communications Officer to join our team for 12 months. Please note this role is dependent on how hiring for our interim Director goes, as we will likely only hire one of these two roles.

Please see our Careers page for the full details of both roles and how to apply. If you have any questions about either role, please reach out to Mabli at mabli@socialchangelab.org
(This is a draft I wrote in December 2021. I didn't finish and publish it then, in part because I was nervous it could be too spicy. At this point, with the discussion post-ChatGPT, it seems far more boring, and someone recommended I post it somewhere.)

Thoughts on the OpenAI Strategy

OpenAI has one of the most audacious plans out there and I'm surprised at how little attention it's gotten.

First, they say flat out that they're going for AGI. Then, when they raised money in 2019, they had a clause that says investors will be capped at getting 100x of their returns back.

> "Economic returns for investors and employees are capped... Any excess returns go to OpenAI Nonprofit... Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress."[1]

On Hacker News, one of their employees says,

> "We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company." [2]

You can read more about this mission on the charter:

> "We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
>
> Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."[3]

This is my [incredibly rough and speculative, based on the above posts] impression of the plan they are proposing:

1. Make AGI
2. Turn AGI into huge profits
3. Give 100x returns to investors
4. Dominate much (most?) of the economy, have all profits go to the OpenAI Nonprofit
5. Use AGI for "the benefit of all"?

I'm really curious what step 5 is supposed to look like exactly. I'm also very curious, of course, what they expect step 4 to look like.

Keep in mind that making AGI is a really big deal. If you're the one company that has an AGI, and if you have a significant lead over anyone else that does, the world is sort of your oyster.[4] If you have a massive lead, you could outwit legal systems, governments, militaries.

I imagine that the 100x return cap means that the excess earnings would go to the hands of the nonprofit, which essentially means Sam Altman, senior leadership at OpenAI, and perhaps the board of directors (if legal authorities have any influence post-AGI). This would be a massive power gain for a small subset of people.

If DeepMind makes AGI, I assume the money would go to investors, which would mean it would be distributed to all of the Google shareholders. But if OpenAI makes AGI, the money will go to the leadership of OpenAI, on paper to fulfill the mission of OpenAI.

On the plus side, I expect that this subset is much more like the people reading this post than most other AGI competitors would be (the Chinese government, for example). I know some people at OpenAI, and my hunch is that the people there are very smart and pretty altruistic. It might well be about the best we could expect from a tech company.

And, to be clear, it's probably incredibly unlikely that OpenAI will actually create AGI, and even more unlikely they will do so with a decisive edge over competitors. But I'm sort of surprised so few other people seem at least a bit concerned and curious about the proposal?
My impression is that most press outlets haven't thought much at all about what AGI would actually mean, and most companies and governments just assume that OpenAI is dramatically overconfident in themselves.

----------------------------------------

(Aside on the details of Step 5)

I would love more information on Step 5, but I don't blame OpenAI for not providing it.

* Any precise description of how a nonprofit would spend "a large portion of the entire economy" would upset a bunch of powerful people.
* Arguably, OpenAI doesn't really need to figure out Step 5 unless their odds of actually having a decisive AGI advantage seem more plausible.
* I assume it's really hard to actually put together any reasonable plan now for Step 5.

My guess is that we really could use some great nonprofit and academic work to help outline what a positive and globally acceptable (wouldn't upset any group too much if they were to understand it) Step 5 would look like. There's been previous academic work on a "windfall clause"[5] (their 100x cap would basically count), so better work on Step 5 seems like an obvious next step.

[1] https://openai.com/blog/openai-lp/
[2] https://news.ycombinator.com/item?id=19360709
[3] https://openai.com/charter/
[4] This was called a "decisive strategic advantage" in the book Superintelligence by Nick Bostrom.
[5] https://www.effectivealtruism.org/articles/cullen-okeefe-the-windfall-clause-sharing-the-benefits-of-advanced-ai/

----------------------------------------

Also, see: https://www.cnbc.com/2021/03/17/openais-altman-ai-will-make-wealth-to-pay-all-adults-13500-a-year.html

> Artificial intelligence will create so much wealth that every adult in the United States could be paid $13,500 per year from its windfall as soon as 10 years from now.

https://www.techtimes.com/articles/258148/20210318/openai-give-13-500-american-adult-anually-sam-altman-world.htm
https://moores.samaltman.com/
https://www.reddit.com/r/artificial/comments/m7cpyn/openais_sam_altman_artificial_intelligence_will/
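To make the mechanics of that 100x cap concrete, here is a minimal sketch with hypothetical numbers (the function and the figures are my own illustration, not anything from OpenAI's documents):

```python
# Hypothetical illustration of a capped-return structure like the one OpenAI describes.
def split_returns(investment: float, total_return: float, cap_multiple: float = 100.0):
    """Split total returns between investors (capped) and the nonprofit (excess)."""
    investor_cap = investment * cap_multiple                  # the most investors can ever receive
    investor_share = min(total_return, investor_cap)          # payout up to the cap
    nonprofit_share = max(total_return - investor_cap, 0.0)   # everything above the cap
    return investor_share, nonprofit_share

# Example: a $10M first-round investment against $10B of eventual returns.
investors, nonprofit = split_returns(10e6, 10e9)
print(f"Investors: ${investors:,.0f}")   # $1,000,000,000 (the 100x cap)
print(f"Nonprofit: ${nonprofit:,.0f}")   # $9,000,000,000 (flows to the OpenAI Nonprofit)
```

On these made-up numbers, the bigger the eventual returns, the larger the fraction that ends up controlled by the nonprofit rather than by investors, which is the asymmetry the post is pointing at.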
[GIF] A feature I'd love on the forum: while posts are read back to you, the part of the text that is being read is highlighted. This exists on Naturalreaders.com and I'd love to see it here (great for people who have wandering minds like me).
A periodic reminder that you can just email politicians and then meet them (see screenshot below).

Popular comments

Recent discussion


TLDR: If you're an EA-minded animal funder donating $200K/year or more, we'd love to connect with you about several exciting initiatives that AIM is launching over the next several months.

AIM (formerly Charity Entrepreneurship) has a history of incubating and supporting...

4
Elizabeth
9h
Most of these are just "people in the space knew this wouldn't work". Could you share more specific criticisms? As Aidan said, the biggest successes come from projects no one else would do, so without more information that seems like a very weak criticism.
4
Jason
17h
Do you think there are additional steps you could/should take to make this philosophy / these limitations clearer to would-be secondary users who come across your reports? I strongly support more transparency and more release of materials (including less polished work product), but I think it is essential that the would-be secondary user is well aware of the limitations. This could include (e.g.) noting the amount of time spent on the report, the intended audience and use case for the report, the amount of reliance you intend that audience to place on the report, any additional research you expect that intended audience to undertake before relying on the report, and the presence of any significant issues / weaknesses that may be of particular concern to either the intended audience or anticipated secondary users. If you specifically do not intend to correct any errors discovered after a certain time (e.g., after the idea was used or removed from recommended options), it would probably be good to state that as well.

Hi, I am Charity Entrepreneurship (CE, now AIM) Director of Research. I wanted to quickly respond to this point.

– – 

Quality of our reports

I would like to push back a bit on Joey's response here. I agree that our research is quicker, scrappier, and goes into less depth than that of other orgs, but I am not convinced that our reports have more errors or worse reasoning than reports of other organisations (thinking of non-peer-reviewed global health and animal welfare organisations like GiveWell, OpenPhil, Animal Charity Evaluators, Rethink Priorities, Founders Pl... (read more)

This post summarizes "Against the Singularity Hypothesis," a Global Priorities Institute Working Paper by David Thorstad. This post is part of my sequence of GPI Working Paper summaries. For more, Thorstad’s blog, Reflective Altruism, has a three...

2
David Thorstad
12h
Here's a gentle introduction to the kinds of worries people have (https://spectrum.ieee.org/power-problems-might-drive-chip-specialization). Of the cited references, "The Chips Are Down for Moore's Law" is probably the best on this issue, but a little longer/harder. There's plenty of literature on problems with heat dissipation if you search the academic literature. I can dig up references on energy if you want, but with Sam Altman saying we need a fundamental energy revolution even to get to AGI, is there really much controversy over the idea that we'll need a lot of energy to get to superintelligence?
4
David Thorstad
12h
Ah - that comes from the discontinuity claim. If you have accelerating growth that isn't sustained for very long, you get something like population growth from 1800-2000, where the end result is impressive but hardly a discontinuity comparable to crossing the event horizon of a black hole.  (The only way to get around the assumption of sustained growth would be to posit one or a few discontinuous leaps towards superintelligence. But that's harder to defend, and it abandons what was classically taken to ground the singularity hypothesis, namely the appeal to recursive self-improvement).
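(A minimal numerical sketch of this point, with illustrative parameters of my own rather than anything from the paper: growth whose rate accelerates but is only briefly sustained gives a large yet bounded increase, nothing like a discontinuity.)

```python
# Illustrative only: compound growth whose rate itself accelerates each period.
def grow(periods: int, rate: float = 0.05, acceleration: float = 1.1) -> float:
    """Return the overall growth factor after `periods` steps of accelerating growth."""
    level = 1.0
    for _ in range(periods):
        level *= 1 + rate        # grow this period
        rate *= acceleration     # the growth rate itself speeds up
    return level

print(grow(20))    # ~14x: large but bounded, nothing like a discontinuity
print(grow(100))   # astronomically large: the long-sustained case the singularity argument needs
```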

As you write: 

The result will be a singularity, understood as a fundamental discontinuity in human history beyond which our fate depends largely on how we interact with artificial agents

The discontinuity is a result of humans no longer being the smartest agents in the world, and no longer being in control of our own fate. After this point, we've crossed an event horizon beyond which the outcome is almost entirely unforeseeable.

If you have accelerating growth that isn't sustained for very long, you get something like population growth from 1800-2000

If, a... (read more)

Identity

In theory of mind, the question of how to define an "individual" is complicated. If you're not familiar with this area of philosophy, see Wait But Why's introduction.

I think most people in EA circles subscribe to the computational theory of mind, which means that...


If you don't care about where or when duplicate experiences exist, only their number, then not caring about duplicates at all gives you a fanatical wager against the universe having infinitely many moral patients, e.g. by being infinitely large spatially, going on forever in time, or having infinitely many pocket universes.

It would also give you a wager against the many-worlds interpretation of quantum mechanics, because there will be copies of you having identical experiences in branches that are already (at least slightly) physically distinct.

2
MichaelStJules
1h
Also, I'd guess most people who value diversity of experience mean that only for positive experiences. I doubt most would mean repeated bad experiences aren't as bad as diverse bad experiences, all else equal.
4
Arepo
2h
This. I'm imagining some Abrodolph Lincoler-esque character - Abronard Willter maybe - putting me in a brazen bull and cooing 'Don't worry, this will all be over soon. I'm going to create 10 billion more of you, also on a rack, and the fact that I continue to torture you personally will barely matter.'

"The UN Secretary-General's AI Advisory Body has launched its Interim Report: Governing AI for Humanity. The report calls for a closer alignment between international norms and how AI is developed and rolled out. The central piece of the report is a proposal to strengthen international governance of AI by carrying out seven critical functions such as horizon scanning for risks and supporting international collaboration on data, and computing capacity and talent to reach the Sustainable Development Goals (SDGs). It also includes recommendations to enhance accountability and ensure an equitable voice for all countries."

Would be interested to know people's take on this. The final report, revised to take account of the submissions, will be presented in August 2024, in time for the UN's Summit of the Future in September. Both individual submissions and those on behalf of organisations are welcomed...


Like many organizations, Open Philanthropy has had multiple founding moments. Depending on how you count, we will be either seven, ten, or thirteen years old this year. Regardless of when you start the clock, it’s possible that we’ve changed more in the last two years than...

29
Linda Linsefors
11h
From the linked report:

Here's a story I recently heard from someone I trust: An AI Safety project got their grant application approved by OpenPhil, but still had more room for funding. After OpenPhil promised them a grant but before it was paid out, this same project also got a promise of funding from the Survival and Flourishing Fund (SFF). When OpenPhil found out about this, they rolled back the amount of money they would pay to this project by the exact amount that this project was promised by SFF, rendering the SFF grant meaningless.

I don't think this is ok behaviour, and definitely not what you do to get more funders involved.

Is there some context I'm missing here? Or has there been some misunderstanding? Or is this as bad as it looks?

I'm not going to name either the source or the project publicly (they can name themselves if they want to), since I don't want to get anyone else into trouble, or risk their chances of getting OpenPhil funding. I also want to make clear that I'm writing this on my own initiative.

There is probably some more delicate way I could have handled this, but anything more complicated than writing this comment would probably have ended up with me not taking action at all, and I think this sort of thing is worth calling out.

I understand posting this here, but for following up specific cases like this, especially second-hand, I think it's better to first contact OpenPhil before airing it publicly. As you mentioned, there is likely to be much context here we don't have, and it's hard to have a public discussion without most of the context.

"There is probably some more delicate way I could have handled this, but anything more complicated than writing this comment, would probably have ended up with me not taking action at all"

That's a fair comment. I understand the importance of ov... (read more)

8
Ben_West
15h
Thanks for writing and sharing this Alexander – I thought it was an unusually helpful and transparent post.

I’m Emma from the Communications team at the Centre for Effective Altruism (CEA). I want to flag a few media items related to EA that have come out recently or will be coming out soon, given they’ll touch on topics—like FTX—that I expect will be of interest to Forum readers...


If EA currently

  1. is in the middle of a Dark Forest (e.g. news outlets systematically following emergent consumer interest in criticizing EA and everything it stands for)
  2. perceives themselves as currently being in the middle of a dark forest or at risk of already being in a dark forest (which might be hard to evaluate e.g. due to the dynamics described in Social Dark Matter)
  3. expects to enter a dark forest at some point in the near future (or the world around them to turn into a dark forest e.g. if China invades Taiwan and a wide variety of norms go out th
... (read more)
1
2ndRichter
8h
It’s in the image on the lower far right— “After Bankman-Fried, effective altruists won’t be fooled again, Opinion by Zach Robinson.”
1
trevor1
5h
Ah, my bad, I did a ctrl + f for "sam"! Glad that it was nothing.

If you've read Leif's WIRED article or Poverty is No Pond & have questions for him, I'd love to share them with him & in turn share his answers here.

Thank you, M, for sharing this with me & encouraging me to connect.


I thought he spelled out his ETG criticism quite clearly in the article, so I’ll paraphrase what I imbibed here.

I think he would argue that, for the same person in the same job, donating X% of their money is a better thing. However, the ETG ethos that has hung around in the community promotes seeking out extremely high-paying jobs in order to donate even more money. These jobs often bring about more harms in turn (both in an absolute sense, and possibly also to the point that ETG is net-negative, for example in the case of SBF), especially if we live in an economic system that rewards behaviour that profits off negative externalities.

5
David Mathers
17h
Questions designed to trip him up or teach him a lesson are emotionally tempting, but don't seem very useful to me. Better to ask him how he thinks practical stuff can be improved, or what he thinks particularly big mistakes of GiveWell or other EA orgs were in terms of funding decisions, not broad philosophy (we've all heard standard objections to consequentialism before). I suspect he won't have any good suggestions on the latter, but you never know.

This post was cross-posted from the substack Thing of Things with the permission of the author.


In defense of trying things out

The Economist recently published an article, “How poor Kenyans became economists’ guinea pigs,” which critiques development economists’ use of randomized...

4
Arepo
13h
The Copenhagen interpretation of ethics strikes again.
9
huw
11h
It's wild for a news organisation that routinely witnesses and reports on tragedies without intervening (as is standard journalistic practice, for good reason) to not recognise it when someone else does it.

I hadn't even thought of that! Yeah, that's some pretty impressive hypocrisy.

Share your information in this thread if you are looking for full-time, part-time, or limited project work in EA causes[1]!

We’d like to help people in EA find impactful work, so we’ve set up this thread, and another called Who's hiring? (we did this last in 2022[2]).

Consider...

1
Answer by Seth Ariel Green
15h
TLDR: I write meta-analyses on a contract basis, e.g. here, here, and here. If you want to commission a meta-analysis, and get a co-authored paper to boot, I'd love to hear from you.

Skills & background: I am a nonresident fellow at the Kahneman-Treisman Center at Princeton and an affiliate at the Humane and Sustainable Food Lab at Stanford. Previously I worked at Glo Foundation, Riskified, and Code Ocean.

Location/remote: Brooklyn.

Resume/CV/LinkedIn: see here.

Email/contact: setgree at gmail dot com

Other notes: I'm reasonably subject-agnostic, though my expertise is in behavioral science research.