New & upvoted


Posts tagged community

Quick takes

Next month, two EAGx events are happening in new locations: Austin and Copenhagen! Applications for these events are closing soon:

* Apply to EAGxAustin by this Sunday, March 31
* Apply to EAGxNordics by April 7

These conferences are primarily for people who are at least familiar with the core ideas of effective altruism and are interested in learning more about what to do with these ideas. We're particularly excited to welcome people working professionally in the EA space to connect with others nearby and provide mentorship to those new to the space.

If you want to attend but are unsure about whether to apply, please err on the side of applying! If you've applied to attend an EA Global or EAGx event before, you can use the same application for either event.
Reflecting on the upcoming EAGx event in Utrecht, I find myself both excited and cautiously optimistic about its potential to further grow the Dutch EA community. My experiences from the last EAGx in the Netherlands marked a pivotal moment in my own EA journey (significantly grounding it locally) and boosted the community's growth. I think this event also contributed to the growth of the 10% club and the founding of the School for Moral Ambition this year, highlighting the Netherlands as fertile ground for EA principles.

However, I'm less inclined to view the upcoming event as an opportunity to introduce proto-EAs. Recalling the previous Rotterdam edition's perceived expense, I'm concerned that the cost may deter potential newcomers, especially given the feedback I've heard regarding its perceived extravagance. I think we all understand why these events are worth our charitable Euros, but I have a hard time explaining that to newcomers who are attracted to EA for its (perceived) efficiency/effectiveness.

While the funding landscape may have changed (and this problem may have solved itself through that), I think it remains crucial to consider the aesthetics of events like these where the goal is in part to welcome new members into our community.
[GIF] A feature I'd love on the forum: while posts are read back to you, the part of the text that is being read is highlighted. This exists on Naturalreaders.com, and I'd love to see it here (great for people with wandering minds like me).
(This is a draft I wrote in December 2021. I didn't finish and publish it then, in part because I was nervous it could be too spicy. At this point, with the discussion post-ChatGPT, it seems far more boring, and someone recommended I post it somewhere.)

Thoughts on the OpenAI Strategy

OpenAI has one of the most audacious plans out there, and I'm surprised at how little attention it's gotten.

First, they say flat out that they're going for AGI. Then, when they raised money in 2019, they had a clause saying investors will be capped at getting 100x of their returns back.

> "Economic returns for investors and employees are capped... Any excess returns go to OpenAI Nonprofit... Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress."[1]

On Hacker News, one of their employees says,

> "We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company." [2]

You can read more about this mission in the charter:

> "We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
>
> Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."[3]

This is my [incredibly rough and speculative, based on the above posts] impression of the plan they are proposing:

1. Make AGI.
2. Turn AGI into huge profits.
3. Give 100x returns to investors.
4. Dominate much (most?) of the economy, with all profits going to the OpenAI Nonprofit.
5. Use AGI for "the benefit of all"?

I'm really curious what step 5 is supposed to look like exactly. I’m also very curious, of course, what they expect step 4 to look like.

Keep in mind that making AGI is a really big deal. If you're the one company that has an AGI, and if you have a significant lead over anyone else that does, the world is sort of your oyster.[4] If you have a massive lead, you could outwit legal systems, governments, militaries.

I imagine that the 100x return cap means that the excess earnings would go into the hands of the nonprofit, which essentially means Sam Altman, senior leadership at OpenAI, and perhaps the board of directors (if legal authorities have any influence post-AGI). This would be a massive power gain for a small subset of people.

If DeepMind makes AGI, I assume the money would go to investors, meaning it would be distributed to all of Google's shareholders. But if OpenAI makes AGI, the money will go to the leadership of OpenAI, on paper to fulfill the mission of OpenAI.

On the plus side, I expect that this subset is much more like the people reading this post than most other AGI competitors would be (the Chinese government, for example). I know some people at OpenAI, and my hunch is that the people there are very smart and pretty altruistic. It might well be about the best we could expect from a tech company.

And, to be clear, it’s probably incredibly unlikely that OpenAI will actually create AGI, and even more unlikely they will do so with a decisive edge over competitors. But I'm sort of surprised so few other people seem at least a bit concerned and curious about the proposal?
My impression is that most press outlets haven't thought much at all about what AGI would actually mean, and most companies and governments just assume that OpenAI is dramatically overconfident in itself.

----------------------------------------

(Aside on the details of Step 5)

I would love more information on Step 5, but I don’t blame OpenAI for not providing it.

* Any precise description of how a nonprofit would spend “a large portion of the entire economy” would upset a bunch of powerful people.
* Arguably, OpenAI doesn’t really need to figure out Step 5 unless their odds of actually having a decisive AGI advantage seem more plausible.
* I assume it’s really hard to actually put together any reasonable plan now for Step 5.

My guess is that we really could use some great nonprofit and academic work to help outline what a positive and globally acceptable (wouldn’t upset any group too much if they were to understand it) Step 5 would look like. There’s been previous academic work on a “windfall clause”[5] (their 100x cap would basically count), so better work on Step 5 seems like an obvious next step.

[1] https://openai.com/blog/openai-lp/
[2] https://news.ycombinator.com/item?id=19360709
[3] https://openai.com/charter/
[4] This was called a “decisive strategic advantage” in Nick Bostrom's book Superintelligence.
[5] https://www.effectivealtruism.org/articles/cullen-okeefe-the-windfall-clause-sharing-the-benefits-of-advanced-ai/

----------------------------------------

Also, see: https://www.cnbc.com/2021/03/17/openais-altman-ai-will-make-wealth-to-pay-all-adults-13500-a-year.html

> Artificial intelligence will create so much wealth that every adult in the United States could be paid $13,500 per year from its windfall as soon as 10 years from now.

https://www.techtimes.com/articles/258148/20210318/openai-give-13-500-american-adult-anually-sam-altman-world.htm
https://moores.samaltman.com/
https://www.reddit.com/r/artificial/comments/m7cpyn/openais_sam_altman_artificial_intelligence_will/
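To make the 100x cap quoted in the quick take above concrete, here is a minimal sketch of how such a cap could split proceeds between an investor and the nonprofit. All figures are made up for illustration; OpenAI's actual LP terms are more complicated (for example, the cap multiple is expected to differ by funding round, per the quote above).

```python
# Toy illustration (not OpenAI's actual LP terms): how a 100x return cap could
# split hypothetical proceeds between an investor and the nonprofit parent.
# All numbers below are made up.

def split_returns(investment: float, gross_proceeds: float, cap_multiple: float = 100.0):
    """Return (investor_share, nonprofit_share) under a simple return cap."""
    investor_share = min(gross_proceeds, investment * cap_multiple)
    nonprofit_share = max(gross_proceeds - investor_share, 0.0)
    return investor_share, nonprofit_share

# Hypothetical: a $10M investment, and the venture eventually returns $5B gross.
investor, nonprofit = split_returns(10e6, 5e9)
print(f"Investor capped at ${investor:,.0f}; excess ${nonprofit:,.0f} flows to the nonprofit")
```

Under this toy split, everything a venture earns beyond the cap bypasses investors entirely, which is why step 4 above matters so much for who ends up controlling the surplus.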
A periodic reminder that you can just email politicians and then meet them (see screenshot below).


Recent discussion

Wei Dai commented on Moral ~realism 1h ago

Summary: Even from an anti-realist stance on morality, there are various reasons we might expect moral convergence in practice.

[Largely written two years ago; cleaned up for draft amnesty week. The ideas benefited from comments and conversations with many people; errors...


Then I think for practical decision-making purposes we should apply a heavy discount to world A) — in that world, what everyone else would eventually want isn’t all that close to what I would eventually want. Moreover what me-of-tomorrow would eventually want probably isn’t all that close to what me-of-today would eventually want. So it’s much much less likely that the world we end up with even if we save it is close to the ideal one by my lights. Moreover, even though these worlds possibly differ significantly, I don’t feel like from my present position

... (read more)

Like many organizations, Open Philanthropy has had multiple founding moments. Depending on how you count, we will be either seven, ten, or thirteen years old this year. Regardless of when you start the clock, it’s possible that we’ve changed more in the last two years than...


I really appreciated this report; it seemed like one of the most honest and open communications to come out of Open Philanthropy, and it helped me connect with your priorities and vision. A couple of specific things I liked:

I appreciated the comment about the Wytham Abbey purchase, recognising the flow-on effects Open Phil decisions can have on the wider community, and even just acknowledging a mistake - something which is both difficult and uncommon in leadership.

"But I still think I personally made a mistake in not objecting to this grant back when the initial... (read more)

Vasco Grilo
14h
Hello again Alex,

You discuss the allocation of funds across your 2 main areas, global health and wellbeing (GHW) and global catastrophic risks (GCR), but (as before) you do not say anything about the allocation across animal and human interventions in the GHW portfolio. I assume you do not think the funding going towards animal welfare interventions should be greatly increased, but I would say you should at least be transparent about your views.

For reference, I estimate the cost-effectiveness of corporate campaigns for chicken welfare is 13.6 DALY/$ (= 0.01*1.37*10^3), i.e. 680 (= 13.6/0.02) times Open Philanthropy's bar. I got that multiplying:

* The cost-effectiveness of GiveWell's top charities of 0.01 DALY/$ (50 DALY per 5 k$), which is half of Open Philanthropy's bar of 0.02 DALY/$.
* My estimate of 1.37 k (= 1.71*10^3/0.682*2.73/5) for the ratio between the cost-effectiveness of corporate campaigns for chicken welfare and GiveWell's top charities:
  * I calculated that corporate campaigns for broiler welfare increase near-term welfare 1.71 k times as cost-effectively as the lowest cost to save a life among GiveWell’s top charities then of 3.5 k$, corresponding to a cost-effectiveness of 0.286 life/k$ (= 1/(3.5*10^3)).
  * The current mean reciprocal of the cost to save a life of GiveWell’s 4 top charities is 0.195 life/k$ (= (3*1/5 + 1/5.5)*10^-3/4), i.e. 68.2 % (= 0.195/0.286) as high as the cost-effectiveness I just mentioned.
  * The ratio of 1.71 k in the 1st bullet pertains to campaigns for broiler welfare, but Saulius estimated ones for chicken welfare (broilers or hens) affect 2.73 (= 41/15) times as many chicken-years.
  * OP thinks “the marginal FAW [farmed animal welfare] funding opportunity is ~1/5th as cost-effective as the average from Saulius’ analysis”.
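To make the chain of multiplications above easier to follow, here is a small sketch that reproduces the stated arithmetic using the rounded inputs quoted in the comment; the outputs differ slightly from the quoted 13.6 DALY/$ and 680x because of that rounding.

```python
# Reproducing the arithmetic in the comment above, using the rounded inputs it quotes.
# Small differences from the quoted 13.6 DALY/$ and 680x come from rounding.

givewell_ce = 0.01            # DALY/$ for GiveWell top charities (50 DALY per $5k)
op_bar = 0.02                 # Open Philanthropy's bar, DALY/$

broiler_ratio = 1.71e3        # broiler campaigns vs cheapest GiveWell life saved ($3.5k)
givewell_adjust = 0.682       # current GiveWell mean (0.195 life/k$) vs 0.286 life/k$
hens_and_broilers = 2.73      # chicken-years from hen + broiler campaigns vs broilers only (41/15)
marginal_discount = 1 / 5     # OP: marginal FAW funding ~1/5 as cost-effective as Saulius' average

chicken_vs_givewell = broiler_ratio / givewell_adjust * hens_and_broilers * marginal_discount
chicken_ce = givewell_ce * chicken_vs_givewell

print(f"ratio to GiveWell top charities: {chicken_vs_givewell:.0f}")   # ~1369
print(f"cost-effectiveness: {chicken_ce:.1f} DALY/$")                  # ~13.7
print(f"multiple of OP's bar: {chicken_ce / op_bar:.0f}")              # ~685
```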
Vasco Grilo
14h
Thanks for the update, Alex! Could you elaborate on the influence of Cari and Dustin on your grantmaking (see what I have highlighted below), ideally by giving concrete examples?

This is a cross-post from the CGD Blog. For the original post and downloadable Note, please visit https://www.cgdev.org/publication/1-trillion-paradox-why-reforming-research-publishing-should-be-global-priority.

----

Our research system is a perplexing paradox. Each year, approximately $1 trillion of public funds are spent on research worldwide. Whole careers are spent making incremental improvements to research methods. Hundreds of millions of dollars are spent on a single clinical trial. And yet, the global system for sharing research results is a costly mess. Rooted in antiquated journal structures and marred by market failures, our research system prioritises profit at the expense of accessibility, equity, and affordability, hindering our ability to fully reap the benefits of research.

A prior CGD blog argued that research reform is a critical issue for global development and...



If you've read Leif's WIRED article or Poverty is No Pond & have questions for him, I'd love to share them with him & in turn share his answers here.

Thank you, M, for sharing this with me & encouraging me to connect.

Continue reading

My understanding is you are unsupportive of earning-to-give. I agree the trappings of expensive personal luxuries are both substantively bad (often) and poor optics. But the core idea that some people are very lucky and have the opportunity to earn huge amounts of money which they can (and should) then donate, and that this can be very morally valuable, seems right to me. My guess is that regardless of your critiques of specific charities (bednets, deworming, CATF) you still think there are morally important things to do with money. So what do you think of ETG - why is the central idea wrong (if you indeed think that)?

Answer by Rebecca, 8h
1. What do you see as the importance of GiveWell specifically pulling out a “deaths caused” number, vs factoring that number in by lowering the “lives saved” number?
2. Are you saying that no competent philosopher would use their own definition for altruism when what it “really” means is somewhat different? My experience of studying philosophy has been the reverse - defining terms in unique ways is very common.
3. Is the implication of this paragraph, that all the events described happened after SBF started donating FTX money, intentional?
4. Does this mean you think prediction markets don’t end up working in practice to hold people to their track records of mid-probability predictions?
Answer by AnonymousTurtle, 10h
1. What do you donate to?
2. What is your take on GiveDirectly?
3. Do you think Mariam is not a "real, flesh-and-blood human", since you never met her?
4. Do you think that spending money surfing and travelling the world while millions are starving could be considered by some a suboptimal use of capital?

This post summarizes "Against the Singularity Hypothesis," a Global Priorities Institute Working Paper by David Thorstad. This post is part of my sequence of GPI Working Paper summaries. For more, Thorstad’s blog, Reflective Altruism, has a three...


Here's the talk version for anyone who finds it easier to listen to videos: 

Nick K.
17h
Just noting that these are possibly much stronger claims than "AGI will be able to completely disempower humanity" (depending on how hard it is to solve cold fusion a posteriori).
Owen Cotton-Barratt
17h
I support people poking at the foundations of these arguments. And I especially appreciated the discussion of bottlenecks, which I think is an important topic and often brushed aside in these discussions.

That said, I found that this didn't really speak to the reasons I find most compelling in favour of something like the singularity hypothesis. Thorstad says in the second blog post:

I think this is wrong. (Though the paper itself avoids making the same mistake.) There are lots of coherent models where the effective research output of the AI systems is growing faster than the difficulty of increasing intelligence, leading to accelerating improvements despite each doubling of intelligence getting harder than the last. These are closely analogous to the models which can (depending on some parameter choices) produce a singularity in economic growth by assuming endogenous technological growth.

In general I agree with Thorstad that the notion of "intelligence" is not pinned down enough to build tight arguments on it. But I think that he goes too far in inferring that the arguments aren't there. Rather, I think that the strongest versions of the arguments don't directly route through an analysis of intelligence, but something more like the economic analysis. If further investments in AI research drive the price per unit of researcher-year-equivalent down fast enough, this could lead to hyperbolic increases in the amount of effective research progress, and this could in turn lead to rapid increases in intelligence -- however one measures that. I agree that this isn't enough to establish that things will be "orders of magnitude smarter than humans", but for practical purposes the upshot that "there will be orders of magnitude more effective intellectual labour from AI than from humans" does a great deal of work.

On the argument that extraordinary claims require extraordinary evidence, I'd have been interested to see Thorstad's takes on the analyses which suggest that
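As a concrete (and purely illustrative) version of the kind of endogenous-growth model gestured at above, consider a toy dynamic where effective research output feeds back into its own growth rate; whether you get a finite-time "singularity" then hinges on a single parameter. This is my own sketch, not a model from Thorstad's paper or the comment above.

```python
# Toy endogenous-growth sketch (illustrative only): effective research output A
# feeds back into its own growth, dA/dt = A**phi.
# For phi > 1 the solution diverges in finite time (an analogue of a "singularity");
# for phi <= 1 growth is at most exponential. Which regime you are in is a
# parameter choice, as in endogenous technological growth models.

def simulate(phi: float, steps: int = 1000, dt: float = 0.01, cap: float = 1e12) -> float:
    a = 1.0
    for _ in range(steps):
        a += dt * a**phi
        if a > cap:              # treat passing the cap as "divergence reached"
            return float("inf")
    return a

for phi in (0.9, 1.0, 1.5):
    print(f"phi = {phi}: A after 10 time units ≈ {simulate(phi):.3g}")
```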

This paper was published as a GPI working paper in March 2024.

Abstract

The Fading Qualia Argument is perhaps the strongest argument supporting the view that in order for a system to be conscious, it does not need to be made of anything in particular, so long as its internal...


This doesn’t seem so different from p-zombies, and probably some moral thought experiments.

I'm not sure what you mean here. That the simulation argument doesn't seem different from those? Or that the argument that 'we have no evidence of their existence and therefore shouldn't update on speculation about them' is comparable to what I'm saying about the simulation hypothesis? 

If the latter, fwiw, I feel the same way about p-zombies and (other) thought experiments. They are a terrible methodology for reasoning about anything, very occasionally the only ... (read more)

TLDR

Manifold is hosting a festival for prediction markets: Manifest 2024! We’ll have serious talks, attendee-run workshops, and fun side events over the weekend. Chat with special guests like Nate Silver, Scott Alexander, Robin Hanson, Dwarkesh Patel, Cate Hall, and...

Ben Stewart
8h
Why do you think Simone and Malcolm Collins are good speakers for this conference?
Austin
8h
Hey Ben! I'm guessing you're asking because the Collinses don't seem particularly on-topic for the conference? For Manifest, we'll typically invite a range of speakers & guests, some of whom don't have strong pre-existing connections to forecasting; perhaps they have interesting things to share from outside the realm of forecasting, or are otherwise thinkers we respect who are curious to learn more about prediction markets. (Though in this specific case, Simone and Malcolm have published a great book covering different forms of governance, which is topical to our interest in futarchy; and I believe their education nonprofit makes use of internal prediction markets for predicting student outcomes!)

Thanks - yeah, I'm surprised the upsides outweigh the downsides, but it's not my conference [own views]

  1. LGP year in review
  2. Highlights
  3. Learnings
  4. Free Content
  5. Game system
  6. Experts needed

But first, a wholesome photo of us on 'launch day' over a year ago now.

 

The Long Game Project (LGP) aims to improve institutional decision-making (IIDM), planning, forecasting and creative thinking using tabletop exercises (TTXs) and scenarios.


We aim to improve the way leaders practise decision-making and thinking when the stakes are low, to prepare for when the stakes are high: helping challenge judgments, identify mental mindsets, stimulate creativity, manage uncertainty, and improve dynamic awareness. We are improving how table-topping is done.

 

1. First year by the numbers

 

* Large events run: 5
* Biggest game: 51 players
* Total market cap of clients: $3.4B
* Scenarios designed: 135
* Net Promoter Score: 74.9 🤯

(Sanjana tells me this Net Promoter Score means that when people use us, they are very likely to recommend us...
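For readers unfamiliar with the metric: the standard Net Promoter Score is the percentage of promoters (ratings of 9-10 on a 0-10 "how likely are you to recommend us?" question) minus the percentage of detractors (ratings of 0-6). A minimal sketch of that standard definition follows (this is the generic formula, not necessarily LGP's exact survey methodology, and the responses are made up).

```python
# Standard Net Promoter Score: % promoters (ratings 9-10) minus % detractors (0-6)
# on a 0-10 "how likely are you to recommend us?" question.
# Generic definition only; the sample responses below are hypothetical.

def nps(ratings: list[int]) -> float:
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100 * (promoters - detractors) / len(ratings)

# A hypothetical set of responses scoring in the mid-70s:
print(nps([10, 10, 9, 9, 10, 9, 8, 10, 9, 10, 9, 7, 10, 9, 9, 6]))  # 75.0
```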
