New & upvoted


Posts tagged community

Quick takes

Next month, two EAGx events are happening in new locations: Austin and Copenhagen! Applications for these events are closing soon:

* Apply to EAGxAustin by this Sunday, March 31
* Apply to EAGxNordics by April 7

These conferences are primarily for people who are at least familiar with the core ideas of effective altruism and are interested in learning more about what to do with these ideas. We're particularly excited to welcome people working professionally in the EA space to connect with others nearby and provide mentorship to those new to the space. If you want to attend but are unsure about whether to apply, please err on the side of applying! If you've applied to attend an EA Global or EAGx event before, you can use the same application for either event.
Social Change Lab has two exciting opportunities for people passionate about social movements, animal advocacy and research to join our team!

Director (Maternity Cover)

We are looking for a strategic leader to join our team as interim Director. This role will be maternity cover for our current Director (me!) and will be a 12-month contract from July 2024. As Director, you would lead our small team in delivering cutting-edge research on the outcomes and strategies of the animal advocacy and climate movements, and ensuring widespread communication of this work to key stakeholders.

Research and Communications Officer

We also have a potential opportunity for a Research and Communications Officer to join our team for 12 months. Please note this role is dependent on how hiring for our interim Director goes, as we will likely only hire one of these two roles.

Please see our Careers page for the full details of both roles and how to apply. If you have any questions about either role, please reach out to Mabli at mabli@socialchangelab.org
(This is a draft I wrote in December 2021. I didn't finish and publish it then, in part because I was nervous it could be too spicy. At this point, with the discussion post-ChatGPT, it seems far more boring, and someone recommended I post it somewhere.)

Thoughts on the OpenAI Strategy

OpenAI has one of the most audacious plans out there, and I'm surprised at how little attention it's gotten.

First, they say flat out that they're going for AGI. Then, when they raised money in 2019, they had a clause that says investors will be capped at getting 100x of their returns back.

> "Economic returns for investors and employees are capped... Any excess returns go to OpenAI Nonprofit... Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress."[1]

On Hacker News, one of their employees says,

> "We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company." [2]

You can read more about this mission in the charter:

> "We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
>
> Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."[3]

This is my [incredibly rough and speculative, based on the above posts] impression of the plan they are proposing:

1. Make AGI
2. Turn AGI into huge profits
3. Give 100x returns to investors
4. Dominate much (most?) of the economy, with all remaining profits going to the OpenAI Nonprofit
5. Use AGI for "the benefit of all"?

I'm really curious what step 5 is supposed to look like exactly. I’m also very curious, of course, what they expect step 4 to look like.

Keep in mind that making AGI is a really big deal. If you're the one company that has an AGI, and if you have a significant lead over anyone else that does, the world is sort of your oyster.[4] If you have a massive lead, you could outwit legal systems, governments, militaries.

I imagine that the 100x return cap means that the excess earnings would go to the hands of the nonprofit, which essentially means Sam Altman, senior leadership at OpenAI, and perhaps the board of directors (if legal authorities have any influence post-AGI). This would be a massive power gain for a small subset of people.

If DeepMind makes AGI, I assume the money would go to investors, which would mean it would be distributed to all of the Google shareholders. But if OpenAI makes AGI, the money will go to the leadership of OpenAI, on paper to fulfill the mission of OpenAI.

On the plus side, I expect that this subset is much more like the people reading this post than most other AGI competitors would be (the Chinese government, for example). I know some people at OpenAI, and my hunch is that the people there are very smart and pretty altruistic. It might well be about the best we could expect from a tech company.

And, to be clear, it’s probably incredibly unlikely that OpenAI will actually create AGI, and even more unlikely they will do so with a decisive edge over competitors. But I'm sort of surprised so few other people seem at least a bit concerned and curious about the proposal?
My impression is that most press outlets haven't thought much at all about what AGI would actually mean, and most companies and governments just assume that OpenAI is dramatically overconfident in itself.

----------------------------------------

(Aside on the details of Step 5)

I would love more information on Step 5, but I don’t blame OpenAI for not providing it.

* Any precise description of how a nonprofit would spend “a large portion of the entire economy” would upset a bunch of powerful people.
* Arguably, OpenAI doesn’t really need to figure out Step 5 unless their odds of actually having a decisive AGI advantage start to seem more plausible.
* I assume it’s really hard to actually put together any reasonable plan now for Step 5.

My guess is that we really could use some great nonprofit and academic work to help outline what a positive and globally acceptable (wouldn’t upset any group too much if they were to understand it) Step 5 would look like. There’s been previous academic work on a “windfall clause”[5] (their 100x cap would basically count as one), so having better work on Step 5 seems like an obvious next step.

[1] https://openai.com/blog/openai-lp/
[2] https://news.ycombinator.com/item?id=19360709
[3] https://openai.com/charter/
[4] This was called a “decisive strategic advantage” in the book Superintelligence by Nick Bostrom.
[5] https://www.effectivealtruism.org/articles/cullen-okeefe-the-windfall-clause-sharing-the-benefits-of-advanced-ai/

----------------------------------------

Also, see: https://www.cnbc.com/2021/03/17/openais-altman-ai-will-make-wealth-to-pay-all-adults-13500-a-year.html

> Artificial intelligence will create so much wealth that every adult in the United States could be paid $13,500 per year from its windfall as soon as 10 years from now.

https://www.techtimes.com/articles/258148/20210318/openai-give-13-500-american-adult-anually-sam-altman-world.htm
https://moores.samaltman.com/
https://www.reddit.com/r/artificial/comments/m7cpyn/openais_sam_altman_artificial_intelligence_will/
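To make the mechanics of the 100x cap concrete, here is a minimal sketch of a capped-return ("windfall clause"-style) split. Only the 100x multiple comes from OpenAI's announcement; the investment size, windfall size, and the `split_windfall` helper are all made-up for illustration.

```python
# Minimal sketch of a capped-return ("windfall clause"-style) split.
# Only the 100x multiple is from OpenAI's announcement; the investment
# and windfall figures below are made-up numbers for illustration.

def split_windfall(investment: float, windfall: float, cap_multiple: float = 100.0):
    """Return (investor_payout, nonprofit_payout) under a capped-return rule."""
    investor_cap = investment * cap_multiple        # most the investors can ever receive
    investor_payout = min(windfall, investor_cap)   # investors are paid first, up to the cap
    nonprofit_payout = windfall - investor_payout   # everything above the cap goes to the nonprofit
    return investor_payout, nonprofit_payout

# A $1B investment against a hypothetical $10T windfall:
investor, nonprofit = split_windfall(1e9, 1e13)
print(f"Investors: ${investor:,.0f}")    # $100,000,000,000 (capped at 100x)
print(f"Nonprofit: ${nonprofit:,.0f}")   # $9,900,000,000,000 (99% of the windfall)
```

The toy numbers make the power question vivid: at AGI-scale returns, the cap routes nearly everything to the nonprofit, which is exactly why who controls that nonprofit matters.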
A feature I'd love on the Forum: while posts are read back to you, the part of the text that is being read is highlighted. This exists on Naturalreaders.com, and I'd love to see it here (great for people who have wandering minds like me).
A periodic reminder that you can just email politicians and then meet them (see screenshot below).

Recent discussion

This post was cross-posted from the Substack Thing of Things with the permission of the author.


In defense of trying things out

The Economist recently published an article, “How poor Kenyans became economists’ guinea pigs,” which critiques development economists’ use of randomized...

4
Arepo
2h
The Copenhagen interpretation of ethics strikes again.

It's wild for a news organisation that routinely witnesses and reports on tragedies without intervening (as is standard journalistic practice, for good reason) to not recognise it when someone else does it.

2
Karthik Tadepalli
2h
Good post, more detailed thoughts later, but one nitpick: I don't think "most successful RCT" is supposed to mean "most effective intervention" but rather "most influential RCT"; deworming has been picked up by a bunch of NGOs and governments after Miguel and Kremer, plausibly because of that study. (Conflict note: I know and like Ted Miguel)

This is a link-post from the CGD Blog. For the original post and downloadable Note, please visit https://www.cgdev.org/publication/1-trillion-paradox-why-reforming-research-publishing-should-be-global-priority.

----

Our research system is a perplexing paradox. Each year,...


Possibly an infohazard, but would donating to Sci-Hub be the most cost-effective way to tackle this problem? Piracy had a massive effect on the cost structures of the entertainment industry; even if it didn't remove the big players here, it would force them to lower prices. (Moving to preprint servers is hard, given the way major journals control status in established fields.)

The only other way out I see is regulation, and lobbying the EU in particular should be somewhat effective (they're pro-regulation, they govern a significant part of this industry, they already previously supported then rescinded Plan S).

1
SummaryBot
8h
Executive summary: Reforming the global research publishing system to improve accessibility, equity, and affordability should be a high priority given the immense potential benefits compared to the relatively modest investments needed.

Key points:

1. The current research publishing system is costly, inefficient, and inequitable, with most research locked behind paywalls and dominated by a few for-profit publishers.
2. Rough estimates suggest research publishing reforms could yield benefits in the trillions of dollars by increasing the impact of the $1 trillion in annual global research spending.
3. Reforms could also lead to tens of billions in annual savings by reducing research duplication and increasing researcher productivity.
4. Current resources spent on publishing are sufficient to sustain a reformed system, with potential cost savings of $6.5 billion per year.
5. The technology and initiatives exist to enable reform, but high-level political leadership and international coordination are needed to drive systemic change.
6. Research publishing reform has been relatively neglected as a global policy issue and warrants greater engagement from think tanks, policymakers, and funders.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
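The scale argument in that summary is easy to sanity-check with back-of-envelope arithmetic. The $1 trillion spending figure and the $6.5 billion savings figure come from the post; the 1% effectiveness uplift below is purely an illustrative assumption of mine, not a CGD estimate.

```python
# Back-of-envelope check on the scale argument. The $1T spend and $6.5B
# savings come from the post; the 1% uplift is an illustrative assumption.
annual_research_spend = 1e12   # ~$1 trillion/year in global research spending
effectiveness_uplift = 0.01    # assume reform makes that spending just 1% more impactful
publishing_savings = 6.5e9     # potential publishing cost savings per year

annual_benefit = annual_research_spend * effectiveness_uplift + publishing_savings
print(f"${annual_benefit / 1e9:.1f}B per year")  # -> $16.5B per year from a 1% uplift alone
```

Even at a 1% uplift the annual benefit dwarfs plausible reform costs, which is the core of the post's cost-effectiveness case.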

Inspired by Yonatan's post. Updated 2024/03/27 inspired by John Salter.

TL;DR calendar link: https://calendly.com/codercorgi/60min

Why do I think I’ll be useful?

Since mid-2023, I've been formally doing coaching to help mid-career software engineers land fulfilling roles!...

2
Linda Linsefors
30m
That's awesome! I will add you to the list right away!

Here are the other career coaching options on the list, in case you want to connect with your colleagues.


Cross-posted, as there may be others on this forum interested in educating people about early-stage research fields. I am considering taking on additional course design projects over the next few months. Learn more about hiring me to consult on course design.

Introduction

...

I do think AISF is a real improvement to the field. My apologies for not making this clear enough.

 

The 80,000 Hours syllabus = "Go read a bunch of textbooks". This is probably not ideal for a "getting started" guide.

You mean MIRI's syllabus? 

I don't remember what 80k's one looked like back in the day, but the one that is up now is not just "Go read a bunch of textbooks".

I personally used CHAI's one and found it very useful.

Also, sometimes you should go read a bunch of textbooks. Textbooks are great.

TLDR

Manifold is hosting a festival for prediction markets: Manifest 2024! We’ll have serious talks, attendee-run workshops, and fun side events over the weekend. Chat with special guests like Nate Silver, Scott Alexander, Robin Hanson, Dwarkesh Patel, Cate Hall, and...

1
Saul Munn
4h
Hi Ben! Thanks for your comment. I'm curious what you think the upsides and the downsides are?

I'll also add to what Austin said — in general, I think the strategy of [inviting highly accomplished person in field X to a conference about field Y] is underrated as a way to cross-pollinate among and between fields. I think this is especially true of something like prediction markets, where by necessity they're applicable across disciplines; prediction markets are useless absent something on which to predict. This is the main reason I'm in favor of inviting e.g. Rob Miles, Patrick McKenzie, Evan Conrad, Xander Balwit & Nico McCarty, Dwarkesh Patel, etc — many of whom don't actively directly straightforwardly obviously clearly work in prediction markets/forecasting (the way that e.g. Robin Hanson, Nate Silver, or Allison Duettmann do). It's pretty valuable to import intellectual diversity into the prediction market/forecasting community, as well as to export the insights of prediction markets/forecasting to other fields.

(And also, a note to both Ben & anyone else who’s reading this: I’d be happy to hop on a call with anyone who’d like to talk more about any of the decisions we’ve made, take notes on / make a recording of the call, then post the notes/recording publicly here. https://savvycal.com/saulmunn/manifest )
3
Ben Stewart
2h
Thanks for engaging! Yep, I agree with what you said — cross-pollination and interdisciplinary engagement and all that. For context, I haven't spent a lot of time looking at the Collins' work, hence light stakes/investment for this discussion. But my impression of their work makes me skeptical that they are "highly accomplished" in any field, and I am also very surprised that they would be "thinkers [you] respect" (to borrow from Austin's comment).

In terms of their ideas, I think that hosting someone as a speaker at your conference doesn't mean that you endorse all of their ideas. But I think it does mean that you endorse their broad method — how they go about thinking about and communicating their ideas. Looking at the Collins' public output, it's surprising that you would find their work intellectually honest or truth-seeking, which are presumably values of the organisers. I'll leave aside other values which they seem at odds with, which are more serious but harder to discuss. Here are some titles from their YouTube account within the last few months:

- "Why the left has to erase the gay male identity"
- "Feminists won the culture war but lost at life"
- "Is a cult using the trans movement for cover? And how you can protect your kids"
- "Starship troopers prove leftist ideology is evil"
- "Are woke ideas secretly eugenic? with Ed Dutton" (Ed Dutton is a QAnon-believing, transphobic white supremacist. They have collaborated with him multiple times in the past few months; I haven't looked further.)

To be clear, clickbait is fine. It's the tone and ideas that matter. If you think YouTube is a poor forum for intellectual content, compare their output to Rob Miles' YouTube content (another speaker). I think there's a pretty big gulf in how much intellectual respect and endorsement they deserve relative to other potential candidates. Who you respect is your call, but an important factor for whether a conference is good or not is the intellectual taste of the organisers.

Meta: Thanks for your response! I recognize that you are under no obligation to comment here, which makes me all the more appreciative that you're continuing the conversation. <3

***

I've engaged with the Collins' content for about a minute or two in total, and with them personally for the equivalent of half an email chain and a tenth of a conversation. Interpersonally, I've found them quite friendly/reasonable people. Their shared panel at the last Manifest was one of the highest rated of the conference; multiple people came up to me to tell me that they...


Start forecasting in the Understanding AI Series, a collaboration with tech journalist Timothy B. Lee, whose Understanding AI newsletter provides clarifying context for the most important AI stories of today.

Find out more here, and begin sharing your predictions on questions inspired by Lee's writing on AI, such as:


This post summarizes "Against the Singularity Hypothesis," a Global Priorities Institute Working Paper by David Thorstad. This post is part of my sequence of GPI Working Paper summaries. For more, Thorstad’s blog, Reflective Altruism, has a three...


Here's a gentle introduction to the kinds of worries people have (https://spectrum.ieee.org/power-problems-might-drive-chip-specialization). Of the cited references, "The chips are down for Moore's law" is probably best on this issue, but a little longer/harder. There's plenty of literature on problems with heat dissipation if you search the academic literature. I can dig up references on energy if you want, but with Sam Altman saying we need a fundamental energy revolution even to get to AGI, is there really much controversy over the idea that we'll need a lot of energy to get to superintelligence?

4
ElliotJDavies
3h
I feel this claim is disconnected from the definition of the singularity given in the paper:  Further in the paper you write:  [Emphasis mine]. I can't see any reference for either the original definition or the later addition of "sustained".
2
David Thorstad
1h
Ah - that comes from the discontinuity claim. If you have accelerating growth that isn't sustained for very long, you get something like population growth from 1800-2000, where the end result is impressive but hardly a discontinuity comparable to crossing the event horizon of a black hole.

(The only way to get around the assumption of sustained growth would be to posit one or a few discontinuous leaps towards superintelligence. But that's harder to defend, and it abandons what was classically taken to ground the singularity hypothesis, namely the appeal to recursive self-improvement.)
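For readers who want to see this distinction numerically, here is a toy sketch of my own (not from the paper; the growth rates, acceleration factor, and horizon are all arbitrary illustrative assumptions): acceleration that is cut short yields impressive but continuous growth, while the same acceleration sustained long enough produces the runaway the singularity hypothesis needs.

```python
# Toy comparison: accelerating growth that is cut short vs. sustained.
# All numbers are arbitrary illustrative assumptions, not from the paper.

def grow(periods: int, accelerate_until: int, rate: float = 0.05, accel: float = 1.5) -> float:
    """Compound growth whose per-period rate is multiplied by `accel`
    each period, but only for the first `accelerate_until` periods;
    after that the rate is frozen and growth is merely exponential."""
    level, r = 1.0, rate
    for t in range(periods):
        level *= 1 + r          # grow this period at the current rate
        if t < accelerate_until:
            r *= accel          # acceleration: the growth rate itself grows
    return level

print(f"{grow(20, accelerate_until=3):.1f}")   # brief acceleration: ~18x, large but continuous
print(f"{grow(20, accelerate_until=20):.3g}")  # sustained acceleration: ~4e14, a practical discontinuity
```

Cutting the acceleration short still leaves healthy compound growth (the 1800-2000 population analogue); sustaining it is what produces the orders-of-magnitude blow-up.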
EAGxNordics scheduled EAGxNordics 2024 (1h ago)

EAGxNordics 2024 will take place April 26–28 (Friday-Sunday) in Copenhagen, Denmark, at CPH Conference (DGI Byen, Tietgensgade 65, 1704 København).

EAGxNordics’24 is an Effective Altruism conference organized especially for the Nordic and Baltic EA communities. It features networking, talks, and workshops, and is a great way to connect with people in the EA community, discover new opportunities, and learn more about the community, various projects, and cause areas.

See more details on the official page.

Applications

Apply now here!

Application deadline April 7th.


Who is the event for?

EAGxNordics’24 is primarily for people who are:

  • Familiar with the core ideas of effective altruism
  • Interested in learning more about what to do with these ideas
  • From the Nordic or Baltic countries, living there, or planning on moving there

If you want to attend but are unsure about whether to apply, please err on the ...


TL;DR: Global performance indicators (GPIs) compare countries' policy performance, encouraging competition and pressuring policymakers for reforms. While effective, creating GPIs carries risks such as public backlash. However, certain characteristics can mitigate these ...

2
Jamie_Harris
2h
I'm a big fan of these intervention reports. They're not directly relevant to anything I'm working on right now so I'm only skimming them, but they seem high quality to me. I especially appreciate how you draw on both relevant social science external to the movement and more anecdotal evidence and reasoning specific to animal advocacy.

When you summarise the studies, I'd find it more helpful if you summarised the key evidence rather than their all-things-considered views. E.g. in the cost-effectiveness section you mention that costs are low, seeming to assume that the effects would be high enough to justify them. I assume this confidence depends on your reading of the external studies. But from what I see here, without clicking on links, my takeaway is currently something like: "oh, so some social scientists think they can work", which doesn't fill me with much confidence given that I don't know what their methods were, how clear the findings were, etc.

Great suggestion, I'll adopt it for future reports. Thank you :)

1
SummaryBot
8h
Executive summary: Global performance indicators (GPIs) that rank jurisdictions on animal welfare policies could be an effective and low-cost tool to drive policy changes, if designed well to maximize impact and minimize risks.

Key points:

1. GPIs can pressure policymakers to enact reforms by stimulating competition between jurisdictions and attracting media attention.
2. Evidence suggests GPIs can influence policy in desired directions, at least in some contexts, though precise impact is hard to measure.
3. Key risks include public/political backlash and policymakers gaming the system, but these can be mitigated through careful GPI design.
4. Effective GPIs should focus on actionable policies, be updated regularly, come from respected independent sources, and frame criticism respectfully.
5. A small "GPI squad" producing targeted animal welfare GPIs could potentially be highly cost-effective for driving policy changes.
6. After a trial period, a more rigorous cost-effectiveness analysis could determine if the GPI approach is worth continuing at scale.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.