Do we know if @Paul_Christiano or other ex-lab people working on AI policy have non-disparagement agreements with OpenAI or other AI companies? I know Cullen doesn't, but I don't know about anybody else.
I know NIST isn't a regulatory body, but it still seems like standards-setting should be done by people who have no unusual legal obligations. And of course, some other people are or will be working at regulatory bodies, which may have more teeth in the future.
To be clear, I want to differentiate between Non-Disclosure Agreements, which are perfectly s...
I'd flag that it's unclear whether a non-disparagement agreement is even enforceable against a Federal government employee speaking in an official capacity. I haven't done any research on that; I'm just saying that I would not merely assume it is fully enforceable.
Any financial interest in an AI lab is generally going to require recusal/disqualification from a number of matters, because a Federal employee is prohibited from participating personally and substantially in any particular matter in which the employee knows they have a financial interest directly and predictably affe...
I don't think CEA has a public theory of change, it just has a strategy. If I were to recreate its theory of change based on what I know of the org, it'd have three target groups:
Per target group, I'd say it has the following main activities:
Per target group, these activities are aiming fo...
I just looked at [ANONYMOUS PERSON]'s donations. The amount that this person has donated in their life is more than double the amount that I have ever earned in my life. This person appears to be roughly the same age as I am (we graduated from college ± one year of each other). Oof. It makes me wish that I had taken steps to become a software developer back when I was 15 or 18 or 22.
Oh, well. As they say, comparison is the thief of joy. I'll try to focus on doing the best I can with the hand I'm dealt.
FWIW EV has been off-boarding its projects, so it isn't surprising that Asterisk is now nested under something else. I don't know anything about Obelus Inc.
I wonder how the recent turn for the worse at OpenAI should make us feel about e.g. Anthropic and Conjecture and other organizations with a similar structure, or whether we should change our behaviour towards those orgs.
Disclaimer: This shortform contains advice about navigating unemployment benefits. I am not a lawyer or a social worker, and you should use caution when applying this advice to your specific unemployment insurance situation.
Tip for US residents: Depending on which state you live in, taking a work test can affect your eligibility for unemployment insurance.
Unemployment benefits are typically reduced based on the number of hours you've worked in a given week. For example, in New York, you are eligible for the full benefit rate if you worked 10 hours or less ...
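To make the hours-based reduction concrete, here's a minimal sketch. Only the "10 hours or less pays the full rate" threshold comes from the New York example above; the other brackets and the cut-off are hypothetical placeholders, not any state's actual schedule:

```python
# Illustrative sketch (NOT actual NY rules): partial unemployment
# benefit stepped down by hours worked in a given week.
def weekly_benefit(full_rate: float, hours_worked: int) -> float:
    """Return the benefit payable for one week under a hypothetical
    bracket schedule; <=10 hours pays the full rate, as in the NY
    example, and the remaining brackets are made up for illustration."""
    if hours_worked <= 10:
        fraction = 1.0    # full benefit (matches the NY example)
    elif hours_worked <= 16:
        fraction = 0.75   # hypothetical bracket
    elif hours_worked <= 21:
        fraction = 0.5    # hypothetical bracket
    elif hours_worked <= 30:
        fraction = 0.25   # hypothetical bracket
    else:
        fraction = 0.0    # worked too many hours: no benefit this week
    return full_rate * fraction

print(weekly_benefit(504.0, 8))   # 504.0 (full rate)
print(weekly_benefit(504.0, 25))  # 126.0
```

The point of the sketch is just that a few hours of work (e.g. a paid work test) can push you into a lower bracket for that week, so check your own state's schedule before accepting one.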
Working questions
A mental technique I’ve been starting to use recently: “working questions.” When tackling a fuzzy concept, I’ve heard of people using “working definitions” and “working hypotheses.” Those terms help you move forward on understanding a problem without locking yourself into a frame, allowing you to focus on other parts of your investigation.
Often, it seems to me, I know I want to investigate a problem without being quite clear on what exactly I want to investigate. And the exact question I want to answer is quite important! And instead of ne...
This sounds similar to what David Chapman wrote about in How to think real good; he's mostly talking about solving technical STEM-y research problems, but I think the takeaways apply more broadly:
...Many of the heuristics I collected for “How to think real good” were about how to take an unstructured, vague problem domain and get it to the point where formal methods become applicable.
Formal methods all require a formal specification of the problem. For example, before you can apply Bayesian methods, you have to specify what all the hypotheses are, what sorts
From Richard Y Chappell's post Theory-Driven Applied Ethics, answering "what is there for the applied ethicist to do, that could be philosophically interesting?", emphasis mine:
...A better option may be to appeal to mid-level principles likely to be shared by a wide range of moral theories. Indeed, I think much of the best work in applied ethics can be understood along these lines. The mid-level principles may be supported by vivid thought experiments (e.g. Thomson’s violinist, or Singer’s pond), but these hypothetical scenarios are taken to be practically il
I find it encouraging that EAs have quickly pivoted to viewing AI companies as adversaries, after a long period of uneasily viewing them as necessary allies (cf. Why Not Slow AI Progress?). Previously, I worried that social/professional entanglements and image concerns would lead EAs to align with AI companies even after receiving clear signals that AI companies are not interested in safety. I'm glad to have been wrong about that.
Caveat: we've only seen this kind of scrutiny applied to OpenAI and it remains to be seen whether Anthropic and DeepMind will get the same scrutiny.
I think Kelsey Piper's article marks a huge turning point. In 2022, there were lots of people saying in an abstract sense "we shouldn't work with AI companies", but I can't imagine that article being written in 2022. And the call for attorneys for ex-OpenAI employees is another step so adversarial I can't imagine it being taken in 2022. Both of these have been pretty positively received, so I think they reflect a real shift in attitudes.
To be concrete, I imagine if Kelsey wrote an article in 2022 about the non disparagement clause (assume it existed then),...
Topics (AKA wiki pages[1] or tags[2]) are used to organise Forum posts into useful groupings. They can be used to give readers context on a debate that happens only intermittently (see Time of Perils), collect news and events which might interest people in a certain region (see Greater New York City Area), collect the posts by an organisation, or, perhaps most importantly, collect all the posts on a particular subject (see Prediction Markets).
Any user can submit and begin using...
I spent way too much time organizing my thoughts on AI loss-of-control ("x-risk") debates without any feedback today, so I'm publishing perhaps one of my favorite snippets/threads:
A lot of debates seem to boil down to under-acknowledged and poorly-framed disagreements about questions like “who bears the burden of proof.” For example, some skeptics say “extraordinary claims require extraordinary evidence” when dismissing claims that the risk is merely “above 1%”, whereas safetyists argue that having >99% confidence that things won’t go wrong is the “extr...
The current board is:
The only people here who ...
Most possible goals for AI systems are concerned with process as well as outcomes.
People talking about possible AI goals sometimes seem to assume something like "most goals are basically about outcomes, not how you get there". I'm not entirely sure where this idea comes from, and I think it's wrong. The space of goals which are allowed to be concerned with process is much higher-dimensional than the space of goals which are just about outcomes, so I'd expect that on most reasonable senses of "most", process can have a look-in.
What's the interaction with inst...
In the past few weeks, I spoke with several people interested in EA and wondered: What do others recommend in this situation in terms of media to consume first (books, blog posts, podcasts)?
Isn't it time we had a comprehensive guide on which introductory EA books or media to recommend to different people, backed by data?
Such a resource could consider factors like background, interests, and learning preferences, ensuring the most impactful material is suggested for each individual. Wouldn’t this tailored approach make promoting EA among friends and acquaintances more effective and engaging?
Swapcard tips:
You can use Firefox/Safari/Chrome etc. on your phone: go to swapcard.com and use that instead of downloading the Swapcard app from your app store. As far as I know, the only thing the app has that the mobile site does not is the QR code you need to sign in when you first arrive at the venue and pick up your badge.
The other fields, like 'How can I help othe...
This is a cold take that’s probably been said before, but I thought it bears repeating occasionally, if only for the reminder:
The longtermist viewpoint has gotten a lot of criticism for prioritizing “vast hypothetical future populations” over the needs of "real people," alive today. The mistake, so the critique goes, is the result of replacing ethics with math, or utilitarianism, or something cold and rigid like that. And so it’s flawed because it lacks the love or duty or "ethics of care" or concern for justice that lead people to alternatives like mutual...
Thanks for this reply — it does resonate with me. It actually got me thinking back to Paul Bloom's Against Empathy book, and how when I read that I thought something like: "oh yeah empathy really isn't the best guide to acting morally," and whether that view contradicts what I was expressing in my quick take above.
I think I probably should have framed the post more as "longtermism need not be totally cold and utilitarian," and that there's an emotional, caring psychological relationship we can have to hypothetical future people because we can imaginatively...
In food ingredient labeling, some food items do not require an ingredient list. E.g., Article 19 from the relevant EU regulation:
...

Exempting alt proteins seems unlikely to me. The presumed rationale for this exemption is that these are close to single-ingredient foodstuffs whose single ingredient is (or whose few ingredients are) obvious, so requiring them to bear an ingredient list is pointless.