New & upvoted


Quick takes

Cullen
4d
I am not under any non-disparagement obligations to OpenAI. It is important to me that people know this, so that they can trust any future policy analysis or opinions I offer. I have no further comments at this time.
Linch
1d
Do we know if @Paul_Christiano or other ex-lab people working on AI policy have non-disparagement agreements with OpenAI or other AI companies? I know Cullen doesn't, but I don't know about anybody else. I know NIST isn't a regulatory body, but it still seems like standards-setting should be done by people who have no unusual legal obligations. And of course, some other people are or will be working at regulatory bodies, which may have more teeth in the future.

To be clear, I want to differentiate between Non-Disclosure Agreements, which are perfectly sane and reasonable in at least a limited form as a way to prevent leaking trade secrets, and non-disparagement agreements, which prevent you from saying bad things about past employers. The latter seems clearly bad to have for anybody in a position to affect policy. Doubly so if the existence of the non-disparagement agreement is itself kept secret.
I don't think CEA has a public theory of change; it just has a strategy. If I were to reconstruct its theory of change based on what I know of the org, it'd have three target groups:

1. Non-EAs
2. Organisers
3. Existing members of the community

Per target group, I'd say it has the following main activities:
* Targeting non-EAs, it does comms and education (the VP programme).
* Targeting organisers, you have the work of the groups team.
* Targeting existing members, you have the events team, the forum team, and community health.

Per target group, these activities are aiming for the following short-term outcomes:
* Targeting non-EAs, it doesn't aim to raise awareness of EA; instead, it aims to ensure people have an accurate understanding of what EA is.
* Targeting organisers, it aims to improve their ability to organise.
* Targeting existing members, it aims to improve information flow (through EAG(x) events, the forum, newsletters, etc.) and maintain a healthy culture (through community health work).

If you're interested, you can see EA Netherlands' theory of change here.
I just looked at [ANONYMOUS PERSON]'s donations. The amount that this person has donated in their life is more than double the amount that I have ever earned in my life. This person appears to be roughly the same age as I am (we graduated from college ± one year of each other). Oof. It makes me wish that I had taken steps to become a software developer back when I was 15 or 18 or 22. Oh, well. As they say, comparison is the thief of joy. I'll try to focus on doing the best I can with the hand I'm dealt.


Recent discussion

Scarlett Johansson makes a statement about the "Sky" voice, a voice for GPT-4o that OpenAI recently pulled after less than a week of prime time.

tl;dr: OpenAI made an offer last September to Johansson; she refused. They offered again 2 days before the public demo. Scarlett...

Geoffrey Miller
My sense is that public opinion has already been swinging against the AI industry (not just OpenAI), and that this is a good and righteous way to slow down reckless AGI 'progress' (i.e. the hubris of the AI industry driving humanity off a cliff).

Maybe I already had a pretty dim view, but this incident did not update me about his character personally (whereas "sign a lifetime nondisparagement agreement within 60 days or lose all of your previously earned equity" did surprise me a bit). 

I did update negatively on his competency/PR skills though. 

Geoffrey Miller
My take is this: Whenever Sam Altman behaves like an unprincipled sociopath, yet again, we should update, yet again, in the direction of believing that Sam Altman might be an unprincipled sociopath, who should not be permitted to develop the world's most dangerous technology (AGI).

I'm prepping a new upper-level undergraduate/graduate seminar on 'AI and Psychology', which I'm aiming to start teaching in Jan 2025. I'd appreciate any suggestions that people might have for readings and videos that address the overlap of current AI research (both capabilities...


This course sounds cool! Unfortunately there doesn't seem to be too much relevant material out there. 

This is a stretch, but I think there's probably some cool computational modeling to be done with human value datasets (e.g., 70,000 responses to variations on the trolley problem). What kinds of universal human values can we uncover? https://www.pnas.org/doi/10.1073/pnas.1911517117 
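To make that concrete, here's a rough sketch (my own illustration, not the paper's method; the file name and factor columns are made up) of the kind of modeling I have in mind:

```python
# Illustrative sketch only, not the method from the linked PNAS paper.
# Assumes a hypothetical file "trolley_responses.csv" with one row per
# response: a binary "endorse" column (1 = divert the trolley) and
# columns for the factors varied across problem variants.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trolley_responses.csv")

# Logistic regression: how strongly does each factor shift endorsement?
model = smf.logit(
    "endorse ~ lives_saved + personal_force + intended_harm",
    data=df,
).fit()

# Large, consistent coefficients across respondent subgroups would be one
# crude signal of a widely shared value.
print(model.summary())
```

A hierarchical model with per-respondent or per-country effects would be a natural next step for probing how "universal" those values really are.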

For digestible content on technical AI safety, Robert Miles makes good videos. https://www.youtube.com/c/robertmilesai

yanni kyriacos posted a Quick Take

Two jobs in AI Safety Advocacy that AFAICT don't exist, but should and probably will very soon. Will EAs be the first to create them though? There is a strong first-mover advantage waiting for someone:

1. Volunteer Coordinator - there will soon be a groundswell from the general population wanting to have a positive impact in AI. Most won't know how. A volunteer coordinator will help capture and direct their efforts positively, for example, by having them write emails to politicians.

2. Partnerships Manager - the President of the Voice Actors guild reached out to me recently. We had a surprising amount of crossover in concerns and potential solutions. Voice Actors are the canary in the coal mine. More unions (etc.) will follow very shortly. I imagine within a year there will be a formalised group of these different orgs advocating together.


After Sam Bankman-Fried proved to be a sociopathic fraudster and a massive embarrassment to EA, we did much soul-searching about what EAs did wrong, in failing to detect and denounce his sociopathic traits. We spent, collectively, thousands of hours ruminating about what...


[MY REPLY AS AN IMAGE] > I think there is a > 50% chance he has a psychological condition worth worrying about:

Wei Dai
Agreed with the general thrust of this post. I'm trying to do my part, despite a feeling of "PR/social/political skills is so far from what I think of as my comparative advantage. What kind of a world am I living in, that I'm compelled to do these things?"
Ben Millwood posted a Quick Take

I wonder how the recent turn for the worse at OpenAI should make us feel about e.g. Anthropic and Conjecture and other organizations with a similar structure, or whether we should change our behaviour towards those orgs.

  • How much do we think that OpenAI's problems are idiosyncratic vs. structural? If e.g. Sam Altman is the problem, we can still feel good about peer organisations. If instead the need to weigh investor concerns against safety concerns is the root of the problem, we should be worried about whether peer organisations are going to be pushed down the same path sooner or later.
  • Are there any concerns we have with OpenAI that we should be taking this opportunity to put to its peers as well? For example, have peers been publicly asked if they use non-disparagement agreements? I can imagine a situation where another org has really just never thought to use them, and we can use this occasion to encourage them to turn that into a public commitment.
Wei Dai commented on Are AI risks tractable?

I'm very convinced about the Importance and Neglectedness of AI risks.

What are the best resources to get convinced about the Tractability?

I'm not concerned about many AI Safety projects having ~0 impact; I'm concerned about projects having negative impact (e.g. Thoughts ...

Answer by Wei Dai

I'm also concerned about many projects having negative impact, but think there are some with robustly positive impact:

  1. Making governments and the public better informed about AI risk, including e.g. what x-safety cultures at AI labs are like, and the true state of alignment progress. Geoffrey Irving is doing this at UK AISI and recruiting, for example.
  2. Try to think of important new arguments/considerations, for example a new form of AI risk that nobody has considered, or new arguments for some alignment approach being likely or unlikely to succeed. (But t
...

As Shakeel noted on Twitter/X, this is "the closest thing we've got to an IPCC report for AI". 

Below I've pasted info from the link.

Background information

The report was commissioned by the UK government and chaired by Yoshua Bengio, a Turing Award-winning AI academic and member of the UN’s Scientific Advisory Board. The work was overseen by an international Expert Advisory Panel with nominees from 30 countries, including the UK and the nations invited to the AI Safety Summit at Bletchley Park in 2023, as well as representatives of the European Union and the United Nations.

The report’s aim is to drive a shared, science-based, up-to-date understanding of the safety of advanced AI systems, and to develop that understanding over time. To do so, the report brings together world-leading AI countries and the best global AI expertise to analyse the best existing scientific research...


This post was written by Peli Grietzer, inspired by internal writings by TJ (tushant jha), for AOI[1]. The original post, published on Feb 5, 2024, can be found here: https://ai.objectives.institute/blog/the-problem-with-alignment.

The purpose of our work at the AI Objectives Institute (AOI) is to direct the impact of AI towards human autonomy and human flourishing. In the course of articulating our mission and positioning ourselves -- a young organization -- in the landscape of AI risk orgs, we’ve come to notice what we think are serious conceptual problems with the prevalent vocabulary of ‘AI alignment.’ This essay will discuss some of the major ways in which we think the concept of ‘alignment’ creates bias and confusion, as well as our own search for clarifying concepts. 

At AOI, we try to think about AI within the context of humanity’s contemporary institutional structures: How do...
