New & upvoted

Posts tagged community

Quick takes

Next month, two EAGx events are happening in new locations: Austin and Copenhagen! Applications for these events are closing soon:

* Apply to EAGxAustin by this Sunday, March 31
* Apply to EAGxNordics by April 7

These conferences are primarily for people who are at least familiar with the core ideas of effective altruism and are interested in learning more about what to do with those ideas. We're particularly excited to welcome people working professionally in the EA space to connect with others nearby and provide mentorship to those newer to it. If you want to attend but are unsure whether to apply, please err on the side of applying! If you've applied to an EA Global or EAGx event before, you can reuse the same application for either event.
A feature I'd love on the Forum: while posts are read aloud to you, the part of the text currently being read is highlighted. This exists on Naturalreaders.com and I'd love to see it here (great for people who have wandering minds like me).
(This is a draft I wrote in December 2021. I didn't finish and publish it then, in part because I was nervous it could be too spicy. At this point, with the post-ChatGPT discussion, it seems far more boring, and someone recommended I post it somewhere.)

Thoughts on the OpenAI Strategy

OpenAI has one of the most audacious plans out there, and I'm surprised at how little attention it's gotten.

First, they say flat out that they're going for AGI. Then, when they raised money in 2019, they included a clause capping investors at 100x their returns:

> "Economic returns for investors and employees are capped... Any excess returns go to OpenAI Nonprofit... Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress."[1]

On Hacker News, one of their employees says:

> "We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company." [2]

You can read more about this mission in the charter:

> "We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
>
> Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."[3]

This is my [incredibly rough and speculative, based on the above posts] impression of the plan they are proposing:

1. Make AGI.
2. Turn AGI into huge profits.
3. Give 100x returns to investors.
4. Dominate much (most?) of the economy, with all remaining profits going to the OpenAI Nonprofit.
5. Use AGI for "the benefit of all"?

I'm really curious what step 5 is supposed to look like exactly.
I’m also very curious, of course, what they expect step 4 to look like.

Keep in mind that making AGI is a really big deal. If you're the one company that has AGI, and you have a significant lead over anyone else that does, the world is sort of your oyster.[4] If you have a massive lead, you could outwit legal systems, governments, and militaries.

I imagine that the 100x return cap means the excess earnings would go to the nonprofit, which essentially means Sam Altman, senior leadership at OpenAI, and perhaps the board of directors (if legal authorities have any influence post-AGI). This would be a massive power gain for a small subset of people.

If DeepMind makes AGI, I assume the money would go to investors, meaning it would be distributed to all of Google's shareholders. But if OpenAI makes AGI, the money will go to the leadership of OpenAI, on paper to fulfill OpenAI's mission.

On the plus side, I expect that this subset is much more like the people reading this post than most other AGI competitors would be (the Chinese government, for example). I know some people at OpenAI, and my hunch is that the people there are very smart and pretty altruistic. It might well be about the best we could expect from a tech company.

And, to be clear, it’s probably incredibly unlikely that OpenAI will actually create AGI, and even more unlikely that they will do so with a decisive edge over competitors. But I'm sort of surprised so few other people seem even a bit concerned and curious about the proposal. My impression is that most press outlets haven't thought much at all about what AGI would actually mean, and most companies and governments just assume that OpenAI is dramatically overconfident.

----------------------------------------

(Aside on the details of Step 5)

I would love more information on Step 5, but I don’t blame OpenAI for not providing it.
* Any precise description of how a nonprofit would spend “a large portion of the entire economy” would upset a bunch of powerful people.
* Arguably, OpenAI doesn’t really need to figure out Step 5 unless their odds of actually gaining a decisive AGI advantage start to seem more plausible.
* I assume it’s really hard to put together any reasonable plan for Step 5 now.

My guess is that we could really use some great nonprofit and academic work to help outline what a positive and globally acceptable (i.e., wouldn’t upset any group too much if they understood it) Step 5 would look like. There’s been previous academic work on a “windfall clause”[5] (their 100x cap would basically count as one); better work on Step 5 seems clearly valuable.

[1] https://openai.com/blog/openai-lp/
[2] https://news.ycombinator.com/item?id=19360709
[3] https://openai.com/charter/
[4] This is called a “decisive strategic advantage” in Nick Bostrom's book Superintelligence.
[5] https://www.effectivealtruism.org/articles/cullen-okeefe-the-windfall-clause-sharing-the-benefits-of-advanced-ai/

----------------------------------------

Also, see:

https://www.cnbc.com/2021/03/17/openais-altman-ai-will-make-wealth-to-pay-all-adults-13500-a-year.html

> Artificial intelligence will create so much wealth that every adult in the United States could be paid $13,500 per year from its windfall as soon as 10 years from now.

https://www.techtimes.com/articles/258148/20210318/openai-give-13-500-american-adult-anually-sam-altman-world.htm
https://moores.samaltman.com/
https://www.reddit.com/r/artificial/comments/m7cpyn/openais_sam_altman_artificial_intelligence_will/
Reflecting on the upcoming EAGx event in Utrecht, I find myself both excited and cautiously optimistic about its potential to further grow the Dutch EA community. The last EAGx in the Netherlands marked a pivotal moment in my own EA journey (significantly grounding it locally) and boosted the community's growth. I think that event also contributed to the growth of the 10% club and the founding of the School for Moral Ambition this year, highlighting the Netherlands as fertile ground for EA principles.

However, I'm less inclined to view the upcoming event as an opportunity to introduce proto-EAs to the community. Recalling the previous Rotterdam edition's perceived expense, I'm concerned that the cost may deter potential newcomers, especially given the feedback I've heard regarding its perceived extravagance. I think we all understand why these events are worth our charitable euros, but I have a hard time explaining that to newcomers who are attracted to EA for its (perceived) efficiency and effectiveness. While the funding landscape may have changed (and this problem may have solved itself as a result), I think it remains crucial to consider the aesthetics of events like these when part of the goal is to welcome new members into our community.
A periodic reminder that you can just email politicians and then meet them (see screenshot below).

Popular comments

Recent discussion

Bostrom’s new book is out today in hardcover and Kindle in the USA, and on Kindle in the UK.

Description:

A greyhound catching the mechanical lure—what would he actually do with it? Has he given this any thought?

Bostrom’s previous book, Superintelligence: Paths, Dangers, ...


I'm wondering what Nick Bostrom's p(doom) currently is, given the subject of this book. Nine years ago, in his lecture on Superintelligence, he said "less than 50% risk of doom". In an interview four months ago, he said it's good that there has been more focus on risks recently, and that there's still slightly less focus on the risks than is optimal, but that he wants to focus on the upsides because he fears we might "overshoot" and not build AGI at all, which would be tragic in his opinion. So it seems he thinks the risk is less than it used to be beca...

If anyone wants to see what making EA enormous might look like, check out Rutger Bregman's School for Moral Ambition (SMA).

It isn't an EA project (and his accompanying book has a chapter on EA that is quite critical), but the inspiration is clear and I'm sure there...


Thanks! IIRC, we focused on it substantially because a lot of the sign ups for our programmes (e.g. online course) were coming from LinkedIn even when we hadn't put much effort into it. The number of sign ups and the proportion attributed to LinkedIn grew as we put more effort into it. This was mostly the work of our wonderful Marketing Manager, Ana. I don't have access to recent data or information about how it's gone to make much of a call on whether it was worth it, relative to other possible uses of our/Ana's time.

Very interesting! We have made exactly the same observation so we’ve started investing in it more, but we’re still learning how best to go about this.

TLDR

Manifold is hosting a festival for prediction markets: Manifest 2024! We’ll have serious talks, attendee-run workshops, and fun side events over the weekend. Chat with special guests like Nate Silver, Scott Alexander, Robin Hanson, Dwarkesh Patel, Cate Hall, and...

Austin (13h):
Hey Ben! I'm guessing you're asking because the Collinses don't seem particularly on-topic for the conference? For Manifest, we typically invite a range of speakers and guests, some of whom don't have strong pre-existing connections to forecasting; perhaps they have interesting things to share from outside the realm of forecasting, or are otherwise thinkers we respect who are curious to learn more about prediction markets. (Though in this specific case, Simone and Malcolm have published a great book covering different forms of governance, which is topical to our interest in futarchy; and I believe their education nonprofit uses internal prediction markets to predict student outcomes!)
Ben Stewart (13h):
Thanks, yeah. I'm surprised the upsides outweigh the downsides, but it's not my conference. [own views]

I'd like to second Ben and make explicit the concern about platforming ideologues whose public reputation is seen as pro-eugenics.

weeatquince commented on AIM Animal Initiatives 43m ago

TLDR: If you're an EA-minded animal funder donating $200K/year or more, we'd love to connect with you about several exciting initiatives that AIM is launching over the next several months.

AIM (formerly Charity Entrepreneurship) has a history of incubating and supporting...


I went through the old emails today, and I am happy that my description accurately captured what happened and that everything I said can be backed up.

KarolinaSarek (2h):
Thanks for clarifying! We always have an expert-view section in the report, and often consult animal science specialists, but it is possible we missed something. Could you tell me where specifically we made a mistake regarding animal science that could have changed the recommendation? I want to look into it, to fact-check it, and, if it is right, to avoid making this mistake in the future.
mildlyanonymous (3h):
I think this oversimplifies something a lot more complex, and I’m surprised it’s a justification you use here. Of course, on some level what you’re saying is correct in many cases. But imagine you recommend a global health charity to be launched. GiveWell says, “you’re misinterpreting some critical evidence, and this isn’t as impactful as you think”. Charities on the ground say, “this will impact our existing work, so try doing it this other way”. You launch the intervention anyway. The founders immediately get the same feedback, including from trying the intervention, then pivot to coordinating more and aligning with external experts. This seems much more analogous to what happens in the animal space, and it seems like a clear indicator that people were skeptical. Charities aren’t for-profits, which exist in a vacuum of their own profitability. They are part of a broader ecosystem.

Welcome! Use this thread to introduce yourself or ask questions about anything that confuses you. 

PS: this thread is usually titled "Open thread", but I'm experimenting with a more descriptive title this time.

Get started on the EA Forum

The "Guide to norms on the Forum" shares more about the kind of discussions we'd like to see on the Forum and when the moderation team intervenes. For resources that can help you learn about effective altruism, check this list of links.

1. Introduce yourself

If you'd like, share how you became interested in effective altruism, what causes you work on and prioritize, and other fun facts about yourself in the comments below (for inspiration, you can see the last open thread here). You can also add this information to your Forum bio to help other Forum users get to know you.

You can share photos as well as GIFs and videos in your posts.
...

This post was cross-posted by the Forum team with the permission of the author. The author may not see or respond to comments on this post.


Greetings from Shrimp Welfare Project!

Our team is excited to announce the launch of our new webpage dedicated to the Humane Slaughter Initiative! This initiative aims to revolutionise the way shrimps are stunned prior to slaughter and pave the way for the future of ethical shrimp production. Learn more about this new page below.

A newly published article on our website highlights the urgency of responsible pond management: findings revealed alarmingly toxic hydrogen sulphide levels in shrimp ponds in India's Serepalem village, prompting a successful sludge-removal intervention that significantly improved conditions and underscoring the importance of sustainable practices for both shrimp welfare and farm productivity.

Tamar Stelling wrote a piece for De Correspondent...


Gathering some notes on private COVID vaccine availability in the UK.

News coverage:

It sounds like there's been a licensing change allowing provision of the vaccine outside the NHS as of March 2024 (ish). Pharmadoctor is a company that supplies pharmacies and has been putting the word about that they'll soon be able to supply them with vaccine doses for private sale -- most media coverage I found names them specifically. However, the pharmacies themselves are responsible for setting the price and managing bookings. All Pharmadoctor does for the end user is tell you which pharmacies they supply and give you the following pricing guidance:

Comirnaty Omicron XBB.1.5 (Pfizer/BioNTech) £75-£85

Nuvaxovid XBB.1.5 (Novavax) £45-£55 (update: estimated availability from w/c 22/04/2024)

Some places offering bookings:

  • Rose Pharmacy (Deptford, London) replied to my e-mail on 21st March saying they would offer Pfizer for £80 in the next week or so, but didn't have a price for Novavax yet, which they expected to order towards the end of April.
  • JP Pharmacy (Camden High St, London) offers Pfizer for £85.
  • Fleet Street Clinic (London): £95 "initial price" for the updated Pfizer vaccine.
  • Doctorcall (at-home service): vaccine not specified, £90 "in addition to the cost of the visit", which seems to be from £195.
  • I've found that most pharmacies under Pharmadoctor's "Find a Pharmacy" button have little or no web presence and often don't explicitly own up to offering private COVID jabs. I've e-mailed a couple to see what they say. Here's a list of pharmacies I've tried but not heard from, mostly for my own records:
...

I've been linked to The benefits of Novavax explained which is optimistic about the strengths of Novavax, suggesting it has the potential to offer longer-term protection, and protection against variants as well.

I think the things the article says or implies about pushback from mRNA vaccine supporters seem unlikely to me -- my guess is that in aggregate Wall Street benefits much more from eliminating COVID than it does from selling COVID treatments, though individual pharma companies might feel differently -- but they seem like the sort of unlikely thing that someone who had reasonable beliefs about the science but spent too much time arguing on Twitter might end up believing. Regardless, I'm left unsure how to feel about its overall reliability, and would welcome thoughts one way or the other.


If you've read Leif's WIRED article or Poverty is No Pond & have questions for him, I'd love to share them with him & in turn share his answers here.

Thank you, M, for sharing this with me & encouraging me to connect.


I am a GiveWell donor because I want to spend money to improve the world. Should I do something else with that money instead? If so, what?

Answer by Oscar Delaney (8h):
My understanding is that you are unsupportive of earning to give. I agree the trappings of expensive personal luxuries are often substantively bad, and poor optics besides. But the core idea seems right to me: some people are very lucky and have the opportunity to earn huge amounts of money, which they can (and should) then donate, and this can be very morally valuable. My guess is that, regardless of your critiques of specific charities (bednets, deworming, CATF), you still think there are morally important things to do with money. So what do you think of earning to give: why is the central idea wrong (if you indeed think that)?
Answer by Rebecca (13h):
1. What do you see as the importance of GiveWell specifically pulling out a “deaths caused” number, versus factoring that number in by lowering the “lives saved” number?
2. Are you saying that no competent philosopher would use their own definition of altruism when what it “really” means is somewhat different? My experience of studying philosophy has been the reverse: defining terms in one's own way is very common.
3. Is the implication of this paragraph, that all the events described happened after SBF started donating FTX money, intentional?
4. Does this mean you think prediction markets don’t end up working in practice to hold people to their track records of mid-probability predictions?

This post was cross-posted from the substack Thing of Things with the permission of the author.


In defense of trying things out

The Economist recently published an article, “How poor Kenyans became economists’ guinea pigs,” which critiques development economists’ use of randomized controlled trials. I think it exemplifies the profoundly weird way people think about experiments.

The article says:

In 2018, an RCT run by two development economists, in partnership with the World Bank and the water authority in Nairobi, Kenya’s capital, tracked what happened when water supply was cut off to households in several slum settlements where bills hadn’t been paid. Researchers wanted to test whether landlords, who are responsible for settling the accounts, would become more likely to pay as a result, and whether residents would protest.

Hundreds of residents in slum settlements in Nairobi were left without

...