
Quick takes

Next month, two EAGx events are happening in new locations: Austin and Copenhagen! Applications for these events are closing soon:

* Apply to EAGxAustin by this Sunday, March 31
* Apply to EAGxNordics by April 7

These conferences are primarily for people who are at least familiar with the core ideas of effective altruism and are interested in learning more about what to do with these ideas. We're particularly excited to welcome people working professionally in the EA space to connect with others nearby and provide mentorship to those new to the space.

If you want to attend but are unsure about whether to apply, please err on the side of applying! If you've applied to attend an EA Global or EAGx event before, you can use the same application for either event.
Social Change Lab has two exciting opportunities for people passionate about social movements, animal advocacy and research to join our team!

Director (Maternity Cover)

We are looking for a strategic leader to join our team as interim Director. This role will be maternity cover for our current Director (me!) and will be a 12-month contract from July 2024. As Director, you would lead our small team in delivering cutting-edge research on the outcomes and strategies of the animal advocacy and climate movements, and in ensuring widespread communication of this work to key stakeholders.

Research and Communications Officer

We also have a potential opportunity for a Research and Communications Officer to join our team for 12 months. Please note this role is dependent on how hiring for our interim Director goes, as we will likely only hire one of these two roles.

Please see our Careers page for the full details of both roles and how to apply. If you have any questions about either role, please reach out to Mabli at mabli@socialchangelab.org
(This is a draft I wrote in December 2021. I didn't finish and publish it then, in part because I was nervous it could be too spicy. At this point, with the discussion post-ChatGPT, it seems far more boring, and someone recommended I post it somewhere.)

Thoughts on the OpenAI Strategy

OpenAI has one of the most audacious plans out there, and I'm surprised at how little attention it's gotten.

First, they say flat out that they're going for AGI. Then, when they raised money in 2019, they included a clause that caps investors' returns at 100x what they put in.

> "Economic returns for investors and employees are capped... Any excess returns go to OpenAI Nonprofit... Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress."[1]

On Hacker News, one of their employees says:

> "We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company." [2]

You can read more about this mission in the charter:

> "We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.
>
> Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."[3]

This is my [incredibly rough and speculative, based on the above posts] impression of the plan they are proposing:

1. Make AGI.
2. Turn AGI into huge profits.
3. Give 100x returns to investors.
4. Dominate much (most?) of the economy, with all excess profits going to the OpenAI Nonprofit.
5. Use AGI for "the benefit of all"?

I'm really curious what step 5 is supposed to look like exactly. I'm also very curious, of course, what they expect step 4 to look like.

Keep in mind that making AGI is a really big deal. If you're the one company that has an AGI, and if you have a significant lead over anyone else that does, the world is sort of your oyster.[4] If you have a massive lead, you could outwit legal systems, governments, and militaries.

I imagine that the 100x return cap means that the excess earnings would go into the hands of the nonprofit, which essentially means Sam Altman, senior leadership at OpenAI, and perhaps the board of directors (if legal authorities have any influence post-AGI). This would be a massive power gain for a small subset of people. If DeepMind makes AGI, I assume the money would go to investors, which would mean it would be distributed to all of the Google shareholders. But if OpenAI makes AGI, the money will go to the leadership of OpenAI, on paper to fulfill the mission of OpenAI.

On the plus side, I expect that this subset is much more like the people reading this post than most other AGI competitors would be (the Chinese government, for example). I know some people at OpenAI, and my hunch is that the people there are very smart and pretty altruistic. It might well be about the best we could expect from a tech company.

And, to be clear, it's probably incredibly unlikely that OpenAI will actually create AGI, and even more unlikely they will do so with a decisive edge over competitors. But I'm sort of surprised so few other people seem at least a bit concerned and curious about the proposal?

My impression is that most press outlets haven't thought much at all about what AGI would actually mean, and most companies and governments just assume that OpenAI is dramatically overconfident in itself.

----------------------------------------

(Aside on the details of Step 5)

I would love more information on Step 5, but I don’t blame OpenAI for not providing it.

* Any precise description of how a nonprofit would spend “a large portion of the entire economy” would upset a bunch of powerful people.
* Arguably, OpenAI doesn’t really need to figure out Step 5 unless their chances of actually having a decisive AGI advantage come to seem more plausible.
* I assume it’s really hard to actually put together any reasonable plan for Step 5 now.

My guess is that we really could use some great nonprofit and academic work to help outline what a positive and globally acceptable Step 5 would look like (one that wouldn’t upset any group too much if they were to understand it). There’s been previous academic work on a “windfall clause”[5] (their 100x cap would basically count); better work on Step 5 seems like an obvious next step.

[1] https://openai.com/blog/openai-lp/
[2] https://news.ycombinator.com/item?id=19360709
[3] https://openai.com/charter/
[4] This was termed a “decisive strategic advantage” in the book Superintelligence by Nick Bostrom.
[5] https://www.effectivealtruism.org/articles/cullen-okeefe-the-windfall-clause-sharing-the-benefits-of-advanced-ai/

----------------------------------------

Also, see: https://www.cnbc.com/2021/03/17/openais-altman-ai-will-make-wealth-to-pay-all-adults-13500-a-year.html

Artificial intelligence will create so much wealth that every adult in the United States could be paid $13,500 per year from its windfall as soon as 10 years from now.

https://www.techtimes.com/articles/258148/20210318/openai-give-13-500-american-adult-anually-sam-altman-world.htm
https://moores.samaltman.com/
https://www.reddit.com/r/artificial/comments/m7cpyn/openais_sam_altman_artificial_intelligence_will/

[GIF] A feature I'd love on the forum: while posts are read back to you, the part of the text that is being read is highlighted. This exists on Naturalreaders.com, and I'd love to see it here (great for people who have wandering minds like me).
A periodic reminder that you can just email politicians and then meet them (see screenshot below).


Recent discussion

Summary

As the Soviet Union collapsed in 1991, the fate of its weapons of mass destruction (WMD) programs presented a new type of catastrophic risk: what would happen to all the nuclear, biological, and chemical weapons and materials, and the scientists who worked on them? The nuclear weapons were distributed across what were about to become four separate countries (Belarus, Kazakhstan, Russia, and Ukraine). Plus, the thousands of experts who had worked on those weapons, many of whom went unpaid for months at a time as the Soviet economy collapsed, could easily be tempted to sell information to, or even work directly for, states such as Iran and North Korea that were then seeking to build out WMD programs.

But, by the end of the decade, Belarus, Kazakhstan, and Ukraine had agreed to dismantle or return all their nuclear weapons to Russia[1] and joined the Treaty on the Non-Proliferation of Nuclear Weapons...

Continue reading
Yanni Kyriacos posted a Quick Take 24m ago

I have heard rumours that an AI Safety documentary is being made. Separately, a good friend of mine is also seriously considering making one, but he isn't "in" AI Safety. If you know who this first group is and can put me in touch with them, it might be worth getting across each other's plans.

Continue reading

[GIF] A feature I'd love on the forum: while posts are read back to you, the part of the text that is being read is highlighted. This exists on Naturalreaders.com, and I'd love to see it here (great for people who have wandering minds like me).

Continue reading

For what it's worth, I would find the first part of the issue (i.e. making the player "floating" or "sticky") already quite useful, and it seems much easier to implement.

Identity

In the philosophy of mind, the question of how to define an "individual" is complicated. If you're not familiar with this area of philosophy, see Wait But Why's introduction.

I think most people in EA circles subscribe to the computational theory of mind, which implies that any sufficiently capable computing device, running the right program, can instantiate a sentient being. (In the simplest case, by simply simulating a physical brain in sufficient detail.)

Computationalism does not, on its own, solve the identity problem. If two computers are running the exact same simulation of a person, is destroying one of them equivalent to killing a person, even though there's a backup? What about merely turning one off, while leaving it capable of being turned back on later? These are moral questions, not factual ones, and intuitions differ.

Treating each simulation as its own separate moral patient runs into problems once the substrate is taken into account. Consider...

Continue reading
This is a linkpost for http://Less.Online/

A Festival of Writers Who are Wrong on the Internet[1]

LessOnline is a festival celebrating truth-seeking, optimization, and blogging. It's an opportunity to meet people you've only ever known by their LessWrong username or Substack handle.

We're running a rationalist conference!

The ticket cost is $400 minus your LW karma in cents.
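(A rough worked example, assuming karma converts at exactly one cent per point and the price doesn't go below zero: someone with 10,000 LW karma would pay $400 − $100 = $300, and anyone with 40,000 or more karma would presumably pay nothing.)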

Confirmed attendees include Scott Alexander, Eliezer Yudkowsky, Katja Grace, and Alexander Wales.

Less.Online

Go through to Less.Online to learn about who's attending, venue, location, housing, relation to Manifest, and more.

We'll post more updates about this event over the coming weeks as it all comes together.

If LessOnline is an awesome rationalist event,
I desire to believe that LessOnline is an awesome rationalist event;

If LessOnline is not an awesome rationalist event,
I desire to believe that LessOnline is not an awesome rationalist event;

Let me not become attached

...
Continue reading

Like many organizations, Open Philanthropy has had multiple founding moments. Depending on how you count, we will be either seven, ten, or thirteen years old this year. Regardless of when you start the clock, it’s possible that we’ve changed more in the last two years than...

Continue reading

From the linked report:

We think it’s good that people are asking hard questions about the AI landscape and the incentives faced by different participants in the policy discussion, including us. We’d also like to see a broader range of organizations and funders getting involved in this area, and we are actively working to help more funders engage. 

Here's a story I recently heard from someone I trust:

An AI Safety project got their grant application approved by OpenPhil, but still had more room for funding. After OpenPhil promised them a grant but before... (read more)

6
Ben_West
6h
Thanks for writing and sharing this, Alexander – I thought it was an unusually helpful and transparent post.
30
NickLaing
16h
I really appreciated this report; it seemed like one of the most honest and open communications to come out of Open Philanthropy, and it helped me connect with your priorities and vision. A couple of specific things I liked:

I appreciated the comment about the Wytham Abbey purchase, recognising the flow-on effects Open Phil decisions can have on the wider community, and even just acknowledging a mistake - something which is both difficult and uncommon in leadership.

> "But I still think I personally made a mistake in not objecting to this grant back when the initial decision was made and I was co-CEO. My assessment then was that this wasn’t a major risk to Open Philanthropy institutionally, so it wasn’t my place to try to stop it. I missed how something that could be parodied as an “effective altruist castle” would become a symbol of EA hypocrisy and self-servingness, causing reputational harm to many people and organizations who had nothing to do with the decision or the building."

I also liked the admission of slow movement on lead exposure. I had wondered why I hadn't been hearing more on that front, given the huge opportunities there and the potential for something like the equivalent of a disease "elimination", with a huge effect on future generations. From what I've seen, my instinct is that it perhaps had the potential to be a more clear/urgent/cost-effective focus than other Open Phil areas like air quality.

All the best for this year!

I consider myself an advocate for Effective Altruism: I attend EA meet-ups, read/reviewed What We Owe the Future, participated in the effective essays competition, and read/post on the forum occasionally. During the late summer and early fall of 2023, I spent a very large number of hours researching/writing a 10k+ word post directed toward the Effective Altruist community. My proposal was that genetic enhancement technology could be used to benefit humanity, and that it was an overlooked potential cause area. The title was "The Effective Altruist Case for Using Genetic Enhancement to End Poverty."

The thesis of the article is that cognitive ability (as measured by IQ) influences many positive outcomes at the individual and national level. National IQ is positively associated with development and strongly associated with the log of GDP per capita. I next discussed the unfortunate...

Continue reading

This post was cross-posted from the substack Thing of Things with the permission of the author.


In defense of trying things out

The Economist recently published an article, “How poor Kenyans became economists’ guinea pigs,” which critiques development economists’ use of randomized...

Continue reading
4
Arepo
3h
The Copenhagen interpretation of ethics strikes again.

It's wild for a news organisation that routinely witnesses and reports on tragedies without intervening (as is standard journalistic practice, for good reason) to not recognise it when someone else does it.

2
Karthik Tadepalli
3h
Good post, more detailed thoughts later, but one nitpick: I don't think "most successful RCT" is supposed to mean "most effective intervention" but rather "most influential RCT"; deworming has been picked up by a bunch of NGOs and governments after Miguel and Kremer, plausibly because of that study. (Conflict note: I know and like Ted Miguel)

This is a linkpost from the CGD Blog. For the original post and downloadable Note, please visit: https://www.cgdev.org/publication/1-trillion-paradox-why-reforming-research-publishing-should-be-global-priority.

----

Our research system is a perplexing paradox. Each year,...

Continue reading

Possibly an infohazard, but would donating to Sci-Hub be the most cost-effective way to tackle this problem? Piracy had a massive effect on the cost structures of the entertainment industry; even if it didn't remove the big players here, it would force them to lower prices. (Moving to preprint servers is hard, given the way major journals control status in established fields.)

The only other way out I see is regulation, and lobbying the EU in particular should be somewhat effective (they're pro-regulation, they govern a significant part of this industry, and they previously supported and then rescinded Plan S).

1
SummaryBot
9h
Executive summary: Reforming the global research publishing system to improve accessibility, equity, and affordability should be a high priority given the immense potential benefits compared to the relatively modest investments needed.

Key points:

1. The current research publishing system is costly, inefficient, and inequitable, with most research locked behind paywalls and dominated by a few for-profit publishers.
2. Rough estimates suggest research publishing reforms could yield benefits in the trillions of dollars by increasing the impact of the $1 trillion in annual global research spending.
3. Reforms could also lead to tens of billions in annual savings by reducing research duplication and increasing researcher productivity.
4. Current resources spent on publishing are sufficient to sustain a reformed system, with potential cost savings of $6.5 billion per year.
5. The technology and initiatives exist to enable reform, but high-level political leadership and international coordination are needed to drive systemic change.
6. Research publishing reform has been relatively neglected as a global policy issue and warrants greater engagement from think tanks, policymakers, and funders.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Inspired by Yonatan's post. Updated 2024/03/27 inspired by John Salter.

TL;DR calendar link: https://calendly.com/codercorgi/60min

Why do I think I’ll be useful?

Since mid-2023, I've been doing formal coaching to help mid-career software engineers land fulfilling roles!...

Continue reading
2
Linda Linsefors
2h
That's awesome!  I will add you to the list right away!

Here are the other career coaching options on the list, in case you want to connect with your colleagues.