Quick takes

In food ingredient labeling, some food items are not required to bear a list of ingredients. E.g., Article 19 of the relevant EU regulation:

  1. The following foods shall not be required to bear a list of ingredients:
    1. fresh fruit and vegetables, including potatoes, which have not been peeled, cut or similarly treated;
    2. carbonated water, the description of which indicates that it has been carbonated;
    3. fermentation vinegars derived exclusively from a single basic product, provided that no other ingredient has been added;
    4. cheese, butter, fermented milk and cream, to which no ingredient has
... (read more)

Exempting alt proteins seems unlikely to me. The presumed rationale for this exemption is that these are close to single-ingredient foodstuffs whose single ingredient is (or whose few ingredients are) obvious, so requiring them to bear an ingredient list is pointless.

Do we know if @Paul_Christiano or other ex-lab people working on AI policy have non-disparagement agreements with OpenAI or other AI companies? I know Cullen doesn't, but I don't know about anybody else.

I know NIST isn't a regulatory body, but it still seems like standards-setting should be done by people who have no unusual legal obligations. And of course, some other people are or will be working at regulatory bodies, which may have more teeth in the future.

To be clear, I want to differentiate between Non-Disclosure Agreements, which are perfectly s... (read more)

I'd flag the question of whether a non-disparagement agreement is even enforceable against a Federal government employee speaking in an official capacity. I haven't done any research on that, just saying that I would not merely assume it is fully enforceable.

Any financial interest in an AI lab is generally going to require recusal/disqualification from a number of matters, because a Federal employee is prohibited from participating personally and substantially in any particular matter in which the employee knows they have a financial interest directly and predictably affe... (read more)

6
Pablo
Couldn't secretive agreements be mostly circumvented simply by directly asking the person whether they signed such an agreement? If they fail to answer, the answer is very likely 'Yes', especially if one expects them to answer 'Yes' to a parallel question in scenarios where they had signed a non-secretive agreement.
4
Ulrik Horn
Would it go some way toward answering the question if an ex-lab person has said something pretty bad about their past employer? In my simplistic world view, this would mean either that they do not care about legal consequences or that they do not have such an agreement. And I think, perhaps naively, that both of these would make me trust the person to some degree.

I don't think CEA has a public theory of change; it just has a strategy. If I were to recreate its theory of change based on what I know of the org, it'd have three target groups:

  1. Non-EAs
  2. Organisers
  3. Existing members of the community

Per target group, I'd say it has the following main activities:

  • Targeting non-EAs, it does comms and education (the VP programme).
  • Targeting organisers, you have the work of the groups team.
  • Targeting existing members, you have the events team, the forum team, and community health. 

Per target group, these activities are aiming fo... (read more)

There have been writings from CEA on movement-building strategy; I think you might find them in the organiser handbook. These likely aren't up to date though, especially since there's a new CEO.

I just looked at [ANONYMOUS PERSON]'s donations. The amount that this person has donated in their life is more than double the amount that I have ever earned in my life. This person appears to be roughly the same age as I am (we graduated from college ± one year of each other). Oof. It makes me wish that I had taken steps to become a software developer back when I was 15 or 18 or 22.

Oh, well. As they say, comparison is the thief of joy. I'll try to focus on doing the best I can with the hand I'm dealt.

Showing 3 of 4 replies
2
yanni kyriacos
Hi Joseph :) Based on what you've written, I'm going to guess you have probably donated more to effective charities than 99% of the world's population. So you're probably crushing it!

Haha, thanks for bringing a smile to my face.

11
Joseph Lemien
Because my best estimate is that there are different steps toward different paths that would be better than trying to rewind life back to college age and start over. Like the famous Sylvia Plath quote about life branching like a fig tree, unchosen paths tend to wither away. I think that becoming a software developer wouldn't be the best path for me at this point: cost of tuition, competitiveness of the job market for entry-level developers, age discrimination, etc. Being a 22-year-old fresh grad with a bachelor's degree in computer science in 2010 is quite a different scenario than being a 40-year-old who is newly self-taught through Free Code Camp in 202X. I predict that the former would tend to have a lot of good options (with wide variance, of course), while the latter would have fewer good options. If there were some sort of 'guarantee' regarding a good job offer, or if a wealthy benefactor offered to cover tuition and cost of living while I learn, then I would give training/education very serious consideration, but my understanding is that the 2010s were an abnormally good decade to work in tech, and there is now a glut of entry-level software developers.

Time to cancel my Asterisk subscription?

So Asterisk dedicates a whole self-aggrandizing issue to California, leaves EV for Obelus (what is Obelus?), starts charging readers, and, worst of all, celebrates low prices for eggs and milk?

Showing 3 of 7 replies

FWIW EV has been off-boarding its projects, so it isn't surprising that Asterisk is now nested under something else. I don't know anything about Obelus Inc. 

1
Karthik Tadepalli
I see, fair enough.
1
Linch
You should cancel if you think it's not worth the money. The other reasons seem worse.

I wonder how the recent turn for the worse at OpenAI should make us feel about e.g. Anthropic and Conjecture and other organizations with a similar structure, or whether we should change our behaviour towards those orgs.

  • How much do we think that OpenAI's problems are idiosyncratic vs. structural? If e.g. Sam Altman is the problem, we can still feel good about peer organisations. If instead the need to weigh investor concerns against safety concerns is the root of the problem, we should be worried about whether peer organizations are going to be pushed down the same p
... (read more)

Disclaimer: This shortform contains advice about navigating unemployment benefits. I am not a lawyer or a social worker, and you should use caution when applying this advice to your specific unemployment insurance situation.

Tip for US residents: Depending on which state you live in, taking a work test can affect your eligibility for unemployment insurance.

Unemployment benefits are typically reduced based on the number of hours you've worked in a given week. For example, in New York, you are eligible for the full benefit rate if you worked 10 hours or less ... (read more)
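
To make the hours-based idea concrete, here's a minimal sketch in Python. Only the first tier (full benefit rate at 10 hours or less) comes from the New York example above; the other cutoffs and fractions are placeholders I invented for illustration, so check your state's actual schedule before relying on anything like this.

```python
def weekly_benefit(full_rate: float, hours_worked: float) -> float:
    """Rough estimate of a weekly payment under an hours-based
    partial-benefit rule. Only the first tier (<= 10 hours -> full rate)
    reflects the New York example above; the rest are assumptions."""
    tiers = [
        (10, 1.00),  # 10 hours or less: full benefit rate (stated above)
        (20, 0.75),  # assumed placeholder tier
        (30, 0.25),  # assumed placeholder tier
    ]
    for max_hours, fraction in tiers:
        if hours_worked <= max_hours:
            return full_rate * fraction
    return 0.0  # above the top tier: no benefit that week (assumption)


# Example: a $400 full weekly rate with 12 hours worked
print(weekly_benefit(400, 12))  # 300.0 under the assumed tiers
```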

Working questions

A mental technique I’ve been starting to use recently: “working questions.” When tackling a fuzzy concept, I’ve heard of people using “working definitions” and “working hypotheses.” Those terms help you move forward on understanding a problem without locking yourself into a frame, allowing you to focus on other parts of your investigation.

Often, it seems to me, I know I want to investigate a problem without being quite clear on what exactly I want to investigate. And the exact question I want to answer is quite important! And instead of ne... (read more)

This sounds similar to what David Chapman wrote about in How to think real good; he's mostly talking about solving technical STEM-y research problems, but I think the takeaways apply more broadly:

Many of the heuristics I collected for “How to think real good” were about how to take an unstructured, vague problem domain and get it to the point where formal methods become applicable.

Formal methods all require a formal specification of the problem. For example, before you can apply Bayesian methods, you have to specify what all the hypotheses are, what sorts

... (read more)

From Richard Y Chappell's post Theory-Driven Applied Ethics, answering "what is there for the applied ethicist to do, that could be philosophically interesting?", emphasis mine:

A better option may be to appeal to mid-level principles likely to be shared by a wide range of moral theories. Indeed, I think much of the best work in applied ethics can be understood along these lines. The mid-level principles may be supported by vivid thought experiments (e.g. Thomson’s violinist, or Singer’s pond), but these hypothetical scenarios are taken to be practically il

... (read more)

I find it encouraging that EAs have quickly pivoted to viewing AI companies as adversaries, after a long period of uneasily viewing them as necessary allies (cf. Why Not Slow AI Progress?). Previously, I worried that social/professional entanglements and image concerns would lead EAs to align with AI companies even after receiving clear signals that AI companies are not interested in safety. I'm glad to have been wrong about that.

Caveat: we've only seen this kind of scrutiny applied to OpenAI and it remains to be seen whether Anthropic and DeepMind will get the same scrutiny.

5
Lorenzo Buonanno
I don't think it's accurate to say that "EAs have quickly pivoted to viewing AI companies as adversaries, after a long period of uneasily viewing them as necessary allies." My understanding is that no matter how you define "EAs," many people have always been supportive of working with/at AI companies, and many others sceptical of that approach.

I think Kelsey Piper's article marks a huge turning point. In 2022, there were lots of people saying in an abstract sense "we shouldn't work with AI companies", but I can't imagine that article being written in 2022. And the call for attorneys for ex-OpenAI employees is another step so adversarial I can't imagine it being taken in 2022. Both of these have been pretty positively received, so I think they reflect a real shift in attitudes.

To be concrete, I imagine if Kelsey wrote an article in 2022 about the non-disparagement clause (assume it existed then),... (read more)

Draft guidelines for new topic tags (feedback welcome)

Topics (AKA wiki pages[1] or tags[2]) are used to organise Forum posts into useful groupings. They can be used to give readers context on a debate that happens only intermittently (see Time of Perils), collect news and events which might interest people in a certain region (see Greater New York City Area), collect the posts by an organisation, or, perhaps most importantly, collect all the posts on a particular subject (see Prediction Markets). 

Any user can submit and begin using... (read more)

I spent way too much time organizing my thoughts on AI loss-of-control ("x-risk") debates without any feedback today, so I'm publishing perhaps one of my favorite snippets/threads:

A lot of debates seem to boil down to under-acknowledged and poorly-framed disagreements about questions like “who bears the burden of proof.” For example, some skeptics say “extraordinary claims require extraordinary evidence” when dismissing claims that the risk is merely “above 1%”, whereas safetyists argue that having >99% confidence that things won’t go wrong is the “extr... (read more)

Are there currently any safety-conscious people on the OpenAI Board?

huw

The current board is:

  • Bret Taylor (chair): Co-created Google Maps, ex-Meta CTO, ex-Twitter Chairperson, current co-founder of Sierra (AI company)
  • Larry Summers: Ex U.S. Treasury Secretary, Ex Harvard president
  • Adam D'Angelo: Co-founder, CEO Quora
  • Dr. Sue Desmond-Hellmann: Ex-director P&G, Meta, Bill & Melinda Gates; Ex-chancellor UCSF. Pfizer board member
  • Nicole Seligman: Ex-Sony exec, Paramount board member
  • Fidji Simo: CEO & Chair Instacart, Ex-Meta VP
  • Sam Altman
  • Also, Microsoft are allowed to observe board meetings

The only people here who ... (read more)

Remember: EA institutions actively push talented people into the companies making the world-changing tech the public have said THEY DON'T WANT. This is where the next big EA PR crisis will come from (50%). Except this time it won’t just be the tech bubble.

8
harfe
Is this about the safety teams at capabilities labs? If so, I consider it non-obvious whether pushing a talented person into an AI safety role at, e.g., DeepMind is a bad thing. If you think that is a bad thing, consider providing a more detailed argument, and writing a top-level post explaining your view. If, instead, this is about EA institutions pushing people into capabilities roles, consider naming concrete examples. As an example, 80k has a job ad for a prompt engineer role at Scale AI. That does not seem to be a very safety-focused role, and it is not clear how 80k wants to help prevent human extinction with that job ad.

Most possible goals for AI systems are concerned with process as well as outcomes.

People talking about possible AI goals sometimes seem to assume something like "most goals are basically about outcomes, not how you get there". I'm not entirely sure where this idea comes from, and I think it's wrong. The space of goals which are allowed to be concerned with process is much higher-dimensional than the space of goals which are just about outcomes, so I'd expect that on most reasonable senses of "most", process can have a look-in.
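
As a toy way to make the dimension-counting concrete (my own illustration, not anything from the original shortform, under the assumption that goals can be modelled as real-valued reward functions over a finite environment with state set S and trajectory length T):

```latex
% Toy model assumptions: S a finite set of states, T the trajectory length.
\[
  \underbrace{u : S \to \mathbb{R}}_{\text{outcome-only goal, } \dim = |S|}
  \qquad \text{vs.} \qquad
  \underbrace{U : S^{T} \to \mathbb{R}}_{\text{process-sensitive goal, } \dim = |S|^{T}}
\]
% For |S| >= 2 and T >= 2, |S|^T dwarfs |S|, so goals that depend only on
% outcomes form a tiny (measure-zero) slice of the space of possible goals.
```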

What's the interaction with inst... (read more)

In the past few weeks, I spoke with several people interested in EA and wondered: What do others recommend in this situation in terms of media to consume first (books, blog posts, podcasts)?

Isn't it time we had a comprehensive guide on which introductory EA books or media to recommend to different people, backed by data?

Such a resource could consider factors like background, interests, and learning preferences, ensuring the most impactful material is suggested for each individual. Wouldn’t this tailored approach make promoting EA among friends and acquaintances more effective and engaging?

Swapcard tips:

  1. The mobile browser is more reliable than the app

You can use Firefox/Safari/Chrome etc. on your phone, go to swapcard.com, and use that instead of downloading the Swapcard app from your app store. As far as I know, the only thing the app has that the mobile site does not is the QR code you need to sign in when you first get to the venue and pick up your badge.

  2. Only what you put in the 'Biography' field of the 'About Me' section of your profile is searchable in Swapcard

The other fields, like 'How can I help othe... (read more)

This is a cold take that’s probably been said before, but I thought it bears repeating occasionally, if only for the reminder:

The longtermist viewpoint has gotten a lot of criticism for prioritizing “vast hypothetical future populations” over the needs of "real people," alive today. The mistake, so the critique goes, is the result of replacing ethics with math, or utilitarianism, or something cold and rigid like that. And so it’s flawed because it lacks the love or duty or "ethics of care" or concern for justice that lead people to alternatives like mutual... (read more)

Showing 3 of 4 replies
25
Thomas Kwa
I want to slightly push back against this post in two ways:

  • I do not think longtermism is any sort of higher form of care or empathy. Many longtermist EAs are motivated by empathy, but they are also driven by a desire for philosophical consistency, beneficentrism and scope-sensitivity that is uncommon among the general public. Many are also not motivated by empathy -- I think empathy plays some role for me but is not the primary motivator? Cold utilitarianism is more important but not the primary motivator either [1]. I feel much more caring when I cook dinner for my friends than when I do CS research, and it is only because I internalize scope sensitivity more than >99% of people that I can turn empathy into any motivation whatsoever to work on longtermist projects. I think that for most longtermists, it is not more empathy, nor a better form of empathy, but the interaction of many normal (often non-empathy) altruistic motivators and other personality traits that makes them longtermists.
  • Longtermists make tradeoffs between other common values and helping vast future populations that most people disagree with, and without idiosyncratic EA values there is no reason that a caring person should make the same tradeoffs as longtermists. I think the EA value of "doing a lot more good matters a lot more" is really important, but it is still trading off against other values.
    • Helping people closer to you / in your community: many people think this has inherent value
    • Beneficentrism: most people think there is inherent value in being directly involved in helping people. Habitat for Humanity is extremely popular among caring and empathic people, and they would mostly not think it is better to make more of an overall difference by e.g. subsidizing eyeglasses in Bangladesh.
    • Justice: most people think it is more important to help one human trafficking victim than one tuberculosis victim or one victim of omnicidal AI if you create the same welfare, because they

Thanks for this reply — it does resonate with me. It actually got me thinking back to Paul Bloom's Against Empathy book, and how when I read that I thought something like: "oh yeah empathy really isn't the best guide to acting morally," and whether that view contradicts what I was expressing in my quick take above.

I think I probably should have framed the post more as "longtermism need not be totally cold and utilitarian," and that there's an emotional, caring psychological relationship we can have to hypothetical future people because we can imaginatively... (read more)

16
Tyler Johnston
Yeah, I meant to convey this in my post but framing it a bit differently — that they are real people with valid moral claims who may exist. I suppose framing it this way is just moving the hypothetical condition elsewhere to emphasize that, if they do exist, they would be real people with real moral claims, and that matters. Maybe that's confusing though. BTW, my personal views lean towards a suffering-focused ethics that isn't seeking to create happy people for their own sake. But I still think that, in coming to that view, I'm concerned with the experience of those hypothetical people in the fuzzy, caring way that utilitarians are charged with disregarding. That's my main point here. But maybe I just get off the crazy train at my unique stop. I wouldn't consider tiling the universe with hedonium to be the ultimate act of care/justice, but I suppose someone could feel that way, and thereby make an argument along the same lines. Agreed there are other issues with longtermism — just wanted to respond to the "it's not about care or empathy" critique.

[PHOTO] I sent 19 emails to politicians, had 4 meetings, and now I get emails like this. There is SO MUCH low hanging fruit in just doing this for 30 minutes a day (I would do it but my LTFF funding does not cover this). Someone should do this!

Showing 3 of 8 replies
1
yanni kyriacos
Why am I so bad at this, Stephen? Send help.
6
Linch
(Speaking as someone on LTFF, but not on behalf of LTFF) How large of a constraint is this for you? I don't have strong opinions on whether this work is better than what you're funded to do, but usually I think it's bad if LTFF funding causes people to do things that they think are less (positively) impactful! We probably can't fund people to do things that are lobbying or lobbying-adjacent, but I'm keen to figure out or otherwise brainstorm an arrangement that works for you.

Hey Linch, thanks for reaching out! Maybe send me your email or HMU here yannikyriacos@gmail.com
