New & upvoted


Quick takes

Trump recently said in an interview (https://time.com/6972973/biden-trump-bird-flu-covid/) that he would seek to disband the White House office for pandemic preparedness. Given that he usually doesn't give specifics on his policy positions, this seems like something he is particularly interested in. I know politics is discouraged on the EA Forum, but I thought I would post this to say: EA should really be preparing for a Trump presidency. He's up in the polls and IMO has a >50% chance of winning the election. Right now politicians seem relatively receptive to EA ideas; this may change under a Trump administration.
tlevin · 1d
I think some of the AI safety policy community has over-indexed on the visual model of the "Overton Window" and under-indexed on alternatives like the "ratchet effect," "poisoning the well," "clown attacks," and other models where proposing radical changes can make you, your allies, and your ideas look unreasonable.

I'm not familiar with a lot of systematic empirical evidence on either side, but it seems to me like the more effective actors in the DC establishment overall are much more in the habit of looking for small wins that are both good in themselves and shrink the size of the ask for their ideal policy than of pushing for their ideal vision and then making concessions. Possibly an ideal ecosystem has both strategies, but it seems possible that at least some versions of "Overton Window-moving" strategies executed in practice have larger negative effects via associating their "side" with unreasonable-sounding ideas in the minds of very bandwidth-constrained policymakers, who strongly lean on signals of credibility and consensus when quickly evaluating policy options, than the positive effects of increasing the odds of ideal policy and improving the framing for non-ideal but pretty good policies.

In theory, the Overton Window model is just a description of what ideas are taken seriously, so it can indeed accommodate backfire effects where you argue for an idea "outside the window" and this actually makes the window narrower. But I think the visual imagery of "windows" actually struggles to accommodate this -- when was the last time you tried to open a window and accidentally closed it instead? -- and as a result, people who rely on this model are more likely to underrate these kinds of consequences.

Would be interested in empirical evidence on this question (ideally actual studies from the psych, political science, sociology, econ, etc. literatures, rather than specific case studies, due to reference class tennis type issues).
Excerpt from the most recent update from the ALERT team:

Highly pathogenic avian influenza (HPAI) H5N1: What a week! The news, data, and analyses are coming in fast and furious. Overall, ALERT team members feel that the risk of an H5N1 pandemic emerging over the coming decade is increasing. Team members estimate that the chance that the WHO will declare a Public Health Emergency of International Concern (PHEIC) within 1 year from now because of an H5N1 virus, in whole or in part, is 0.9% (range 0.5%-1.3%). The team sees the chance going up substantially over the next decade, with the 5-year chance at 13% (range 10%-15%) and the 10-year chance increasing to 25% (range 20%-30%).

Their estimated 10-year risk is a lot higher than I would have anticipated.
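A quick way to see what these horizon estimates imply, assuming (as a simplification of my own, not something the ALERT team states) a constant, independent annual probability of a PHEIC declaration:

```python
# Back out the constant annual PHEIC probability implied by each cumulative
# estimate, assuming independence across years (my simplification, not ALERT's).

def implied_annual_probability(cumulative: float, years: int) -> float:
    """Solve 1 - (1 - p)**years = cumulative for the annual probability p."""
    return 1 - (1 - cumulative) ** (1 / years)

for years, cumulative in [(1, 0.009), (5, 0.13), (10, 0.25)]:
    p = implied_annual_probability(cumulative, years)
    print(f"{years:>2}-year estimate {cumulative:.1%} -> ~{p:.1%} per year")
```

The implied annual probabilities rise from roughly 0.9% to roughly 2.8%, which matches the team's stated view that the risk grows over the decade rather than staying flat.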
Is EA as a bait and switch a compelling argument for it being bad? I don't really think so.

  1. There are a wide variety of baits and switches, from what I'd call misleading to some pretty normal activities. Is it a bait and switch when churches don't discuss their most controversial beliefs at a "bring your friends" service? What about wearing nice clothes to a first date?[1]
  2. EA is a big movement composed of different groups.[2] Many describe it differently.
  3. EA has done so much global health work that I am not sure it can be described as a bait and switch, e.g. https://docs.google.com/spreadsheets/d/1ip7nXs7l-8sahT6ehvk2pBrlQ6Umy5IMPYStO3taaoc/edit#gid=9418963
  4. EA is way more transparent than any comparable movement. If it is a bait and switch, it does much more than most to make clear where the money goes, e.g. https://openbook.fyi/.

On the other hand:

  1. I do sometimes see people describing EA too favourably or pushing an inaccurate line.

I think that transparency comes with the feature of letting anyone come along and ask "what's going on there?" That can be very beneficial for catching errors, but it also makes bad criticism too cheap.

Overall I don't find this line that compelling, and the parts that are compelling seem largely to concern the past, when EA was smaller (and when it perhaps mattered less). Now that EA is big, it's pretty clear that it cares about many different things. Seems fine.

1. ^ @Richard Y Chappell created the analogy.
2. ^ @Sean_o_h argues that here.
I can't find a better place to ask this, but I was wondering whether/where there is a good explanation of the scepticism of leading rationalists about animal consciousness/moral patienthood. I am thinking in particular of Zvi and Yudkowsky. In the recent 80K podcast with Zvi Mowshowitz, the question came up a bit, and I know he is also very sceptical of interventions for non-human animals on his blog, but I had a hard time finding a clear explanation of where this belief comes from. I really like Zvi's work, and he has been right about a lot of things I was initially on the other side of, so I would be curious to read more of his or similar people's thoughts on this. This seems like potentially a place with a motivation gap: people who don't work on animal welfare have little incentive to explain why they think the things I work on are not that useful.


Recent discussion

This is a cross-post and you can see the original here, written in 2022. I am not the original author, but I thought it was good for more EAs to know about this. 

I am posting anonymously for obvious reasons, but I am a longstanding EA who is concerned about Torres's...

Continue reading

Hi Mark,

I wonder if you'd be willing to do something along the lines of privately verifying that your identity is roughly as described in your post? I think this could be pretty straightforward, and might help a bunch in making things clear and low-drama. (At present you're stating that the claims about your identity are a fabrication, but there's no way for external parties to verify this.)

I think from something like a game-theoretic perspective, absent some verification it will be reasonable for observers to assume that Torres is correct that the anonymo... (read more)

Jason · 15m
Is there any way for us to validate that your EA Forum account and the substack are controlled by the same person?
Jason · 18m
I don't think @titotal was specifically commenting on any allegation that you harassed Torres. It's almost impossible for third parties to rule the more general allegations against unnamed anonymous persons in or out.
  • SoGive works with major donors.
  • As part of our work, we meet with several (10-30 per year) charities, generally ones recommended by evaluators we trust, or (occasionally) recommended by our own research.
  • We learn a lot through these conversations. This suggests that we might want to publish our call notes so that others can also learn about the charities we speak with.
  • Given that we take notes during the calls anyway, it might seem that publishing them would be low cost for us. This appearance, however, would be deceptive. 
    • There is a non-trivial time cost for us, partly because documents which are published are held to a higher standard than those which are purely internal, but mostly because of our relationship with the charities. We want them to feel confident that they can speak openly with us. This means not only an extra step in the process (ie sharing a draft with the organisation
...
Continue reading

Summary

  1. Where there’s overfishing, reducing fishing pressure or harvest rates — roughly the share of the population or biomass caught in a fishery per fishing period — actually allows more animals to be caught in the long run.
  2. Sustainable fishery management policies
...
Continue reading
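To make claim 1 in the summary above concrete, here is a minimal sketch of the standard logistic (Schaefer) surplus-production argument; the growth rate and carrying capacity are invented for illustration, and this is not necessarily the model the post itself uses:

```python
# Long-run (equilibrium) catch as a function of the harvest rate h in a logistic
# surplus-production model: dB/dt = r*B*(1 - B/K) - h*B.
# r and K below are made-up illustrative values, not estimates from the post.

def equilibrium_yield(h: float, r: float = 0.4, K: float = 1_000_000) -> float:
    """Sustained annual catch when a constant fraction h of the stock is taken each year."""
    if h >= r:                      # harvest outpaces growth: the stock collapses
        return 0.0
    equilibrium_biomass = K * (1 - h / r)
    return h * equilibrium_biomass

for h in (0.35, 0.30, 0.25, 0.20):
    print(f"harvest rate {h:.2f} -> long-run catch {equilibrium_yield(h):,.0f} per year")
```

With these numbers the yield-maximising harvest rate is r/2 = 0.20, so in an overfished fishery (any h above 0.20 here) each reduction in fishing pressure raises the long-run catch, which is the counterintuitive point the summary makes.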

Just the arguments in the summary are really solid.[1] And while I wasn't considering supporting sustainability in fishing anyway, I now believe it's more urgent to culturally/semiotically/associatively separate welfare from some strands of "environmentalism". Thanks!

Alas, I don't predict I will work anywhere where this update becomes pivotal to my actions, but my practically relevant takeaway is: I will reproduce the arguments from this post (and/or link it) in contexts where people are discussing conjunctions/disjunctions between environmenta... (read more)


Trump recently said in an interview (https://time.com/6972973/biden-trump-bird-flu-covid/) that he would seek to disband the White House office for pandemic preparedness. Given that he usually doesn't give specifics on his policy positions, this seems like something he ...

Continue reading

The full quote suggests this is because he classifies Operation Warp Speed (reactive, targeted) as very different from the Office (wasteful, impossible to predict what you'll need, didn't work last time). I would classify this as a disagreement about means rather than ends.

 

One last question, Mr. President, because I know that your time is limited, and I appreciate your generosity. We have just reached the four-year anniversary of the COVID pandemic. One of your historic accomplishments was Operation Warp Speed. If we were to have another pandemic, would you take the same actions to manufacture and distribute a vaccine and get it in the arms of Americans as quickly as possible?

Trump: I did a phenomenal job. I appreciate the way you worded that question. So I have a very important Democrat friend, who probably votes for me, but I'm not 100% sure, because he's a serious Democrat, and he asked me about it. He said Operation Warp Speed was one of the greatest achievements in the history of government. What you did was incredible, the speed of it, and the, you know, it was supposed to take anywhere from five to 12 years, the whole thing. Not only that: the ventilators, the therapeutics, Regeneron and other things. I mean Regeneron was incredible. But therapeutics—everything. The overall—Operation Warp Speed, and you never talk about it. Democrats talk about it as if it’s the greatest achievement. So I don’t talk about it. I let others talk about it. 

You know, you have strong opinions both ways on the vaccines. It's interesting. The Democrats love the vaccine. The Democrats. Only reason I don’t take credit for it. The Republicans, in many cases, don’t, although many of them got it, I can tell you. It’s very interesting. Some of the ones who talk the most. I said, “Well, you didn’t have it did you?” Well, actually he did, but you know, et cetera. 

But Democrats think it’s an incredible, incredible achievement, and they wish they could take credit for it, and Republicans don’t. I don't bring it up. All I do is just, I do the right thing. And we've gotten actually a lot of credit for Operation Warp Speed. And the power and the speed was incredible. And don’t forget, when I said, nobody had any idea what this was. You know, we’re two and a half years, almost three years, nobody ever. Everybody thought of a pandemic as an ancient problem. No longer a modern problem, right? You know, you don't think of that? You hear about 1917 in Europe and all. You didn’t think that could happen. You learned if you could. But nobody saw that coming and we took over, and I’m not blaming the past administrations at all, because again, nobody saw it coming. But the cupboards were bare. 

We had no gowns, we had no masks. We had no goggles, we had no medicines. We had no ventilators. We had nothing. The cupboards were totally bare. And I energized the country like nobody’s ever energized our country. A lot of people give us credit for that. Unfortunately, they’re mostly Democrats that give me the credit.

Well, sir, would you do the same thing again to get vaccines in the arms of Americans as quickly as possible, if it happened again in the next four years?

Trump: Well, there are the variations of it. I mean, you know, we also learned when that first came out, nobody had any idea what this was, this was something that nobody heard of. At that time, they didn’t call it Covid. They called it various names. Somehow they settled on Covid. It was the China virus, various other names. 

But when this came along, nobody had any idea. All they knew was dust coming in from China. And there were bad things happening in China around Wuhan. You know, I predicted. I think you'd know this, but I was very strong on saying that this came from Wuhan. And it came from the Wuhan labs. And I said that from day one. Because I saw things that led me to believe that, very strongly led me to believe that. But I was right on that. A lot of people say that now that Trump really did get it right. A lot of people said, “Oh, it came from caves, or it came from other countries.” China was trying to convince people that it came from Italy and France, you know, first Italy, then France. I said, “No, it came from China, and it came from the Wuhan labs.” And that's where it ended up coming from. So you know, and I said that very early. I never said anything else actually. But I've been given a lot of credit for Operation Warp Speed. But most of that credit has come from Democrats. And I think a big portion of Republicans agree with it, too. But a lot of them don't want to say it. They don't want to talk about it.

So last follow-up: The Biden Administration created the Office of Pandemic Preparedness and Response Policy, a permanent office in the executive branch tasked with preparing for epidemics that have not yet emerged. You disbanded a similar office in 2018 that Obama had created. Would you disband Biden's office, too?

Trump: Well, he wants to spend a lot of money on something that you don't know if it's gonna be 100 years or 50 years or 25 years. And it's just a way of giving out pork. And, yeah, I probably would, because I think we've learned a lot and we can mobilize, you know, we can mobilize. A lot of the things that you do and a lot of the equipment that you buy is obsolete when you get hit with something. And as far as medicines, you know, these medicines are very different depending on what strains, depending on what type of flu or virus it may be. You know, things change so much. So, yeah, I think I would. It doesn't mean that we're not watching out for it all the time. But it's very hard to predict what's coming because there are a lot of variations of these pandemics. I mean, the variations are incredible, if you look at it. But we did a great job with the therapeutics. And, again, these therapeutics were specific to this, not for something else. So, no, I think it's just another—I think it sounds good politically, but I think it's a very expensive solution to something that won't work. You have to move quickly when you see it happening.

 


Trump is anti-tackling pandemics except insofar as it implies he did anything wrong

I'd say it's 50/50 but sure. And while politics is discouraged, I don't think that your thing is really what's being discouraged.

 

A crucial consideration in assessing the risks of advanced AI is the moral value we place on "unaligned" AIs—systems that do not share human preferences—which could emerge if we fail to make enough progress on technical alignment.

In this post I'll consider three potential...

Continue reading
Rohin Shah · 14h
I can believe that if the population you are trying to predict for is just humans, almost all of whom have at least some affective empathy. But I'd feel pretty surprised if this were true in whatever distribution over unaligned AIs we're imagining. In particular, I think if there's no particular reason to expect affective empathy in unaligned AIs, then your prior on it being present should be near-zero (simply because there are lots of specific, complicated claims one could make about unaligned AIs, most of which will be false). And I'd be surprised if "zero vs non-zero affective empathy" was not predictive of utilitarian motivations.

I definitely agree that AIs might feel pleasure and pain, though I'm less confident in it than you seem to be. It just seems like AI cognition could be very different from human cognition. For example, I would guess that pain/pleasure are important for learning in humans, but it seems like this is probably not true for AI systems in the current paradigm. (For gradient descent, the learning and the cognition happen separately -- the AI cognition doesn't even get the loss/reward equivalent as an input, so it cannot "experience" it. For in-context learning, it seems very unclear what the pain/pleasure equivalent would be.)

I agree this is possible. But ultimately I'm not seeing any particularly strong reasons to expect this (and I feel like your arguments are mostly saying "nothing rules it out"), whereas I do think there's a strong reason to expect weaker tendencies: AIs will be different, and on average different implies fewer properties that humans have. So aggregating these, I end up concluding that unaligned AIs will be less utilitarian in expectation. (You make a bunch of arguments for why AIs might not be as different as we expect. I agree that if you haven't thought about those arguments before, you should probably reduce your expectation of how different AIs will be. But I still think they will be quite different.)

I don't see why it mat
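As a concrete illustration of the gradient-descent point (my sketch, using a generic PyTorch-style training loop, not anything from the comment): the loss is computed and applied by the training harness, outside the model's own forward computation.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)                               # stand-in for "the AI's cognition"
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x, target = torch.randn(8, 4), torch.randn(8, 1)

prediction = model(x)                                 # forward pass: the model only sees x
loss = nn.functional.mse_loss(prediction, target)     # loss computed by the training code
loss.backward()                                       # gradients propagate into the weights...
optimizer.step()                                      # ...and the optimizer updates them;
                                                      # the loss value is never an input to `model`
```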

Here are a few (long, but high-level) comments I have before responding to a few specific points that I still disagree with:

  • I agree there are some weak reasons to think that humans are likely to be more utilitarian on average than unaligned AIs, for basically the reasons you talk about in your comment (I won't express individual agreement with all the points you gave that I agree with, but you should know that I agree with many of them). 

    However, I do not yet see any strong reasons supporting your view. (The main argument seems to be: AIs will be diff
... (read more)

This is a linkpost for Imitation Learning is Probably Existentially Safe by Michael Cohen and Marcus Hutter.

Abstract

Concerns about extinction risk from AI vary among experts in the field. But AI encompasses a very broad category of algorithms. Perhaps some algorithms would

...
Continue reading
Matthew_Barnett · 4h
This seems false. Plenty of people want wealth and power, which are "conducive to gaining control over [parts of] humanity". It is true that no single person has ever gotten enough power to actually get control over ALL of humanity, but that's presumably because of the difficulty of obtaining such a high level of power, rather than because few humans have ever pursued the capabilities that would be conducive towards that goal. Again, this distinction is quite important.

I agree that a good imitator AI would likely share our disposition towards diminishing marginal returns to resource accumulation. This makes it likely that such AIs would not take very large risks. However, I still think the main reason why no human has ever taken control over humanity is that there was no feasible strategy any human in the past could have taken to obtain such a high degree of control, rather than that all humans in the past voluntarily refrained from taking the risks necessary to obtain that degree of control.

In fact, risk-neutral agents that don't experience diminishing returns to resource consumption will, in the long run, almost surely lose all their wealth in high-risk bets. Therefore, even without this human imitation argument, we shouldn't be much concerned about risk-neutral agents in most scenarios (including risks from reinforcement learners), since they're very likely to go bankrupt before they ever get to the point at which they can take over the world. Such agents are only importantly relevant in a very small fraction of worlds.

Again, the fact that humans acquire power gradually is more a function of our abilities than of our desires. I repeat myself, but this is important: these are critical facts to distinguish from each other. "Ability to" and "desire to" are very different features of the situation. It is very plausible to me that some existing humans would "foom" if they had the ability. But in fact, no human has such an ability, so
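A toy simulation of that point (mine, not Matthew's; the bet structure and numbers are made up): an expected-value maximiser with no diminishing returns keeps taking all-in bets that are favourable in expectation but carry a fixed chance of total loss, and so almost surely ends up with nothing.

```python
import random

def survival_rate(n_agents: int = 100_000, n_bets: int = 30,
                  win_multiplier: float = 3.0, p_win: float = 0.5) -> float:
    """Fraction of agents still solvent after repeatedly going all-in.

    Each bet returns win_multiplier times the stake with probability p_win and
    zero otherwise, so its expected value (1.5x the stake here) makes a
    risk-neutral agent take it every time.
    """
    survivors = 0
    for _ in range(n_agents):
        wealth = 1.0
        for _ in range(n_bets):
            wealth = wealth * win_multiplier if random.random() < p_win else 0.0
            if wealth == 0.0:
                break
        survivors += wealth > 0
    return survivors / n_agents

print(survival_rate())   # ~0.5**30, i.e. essentially zero
```

Expected wealth per agent still grows (1.5**30 times the stake), but it is concentrated in a vanishingly small fraction of lucky runs, which is the "very small fraction of worlds" point.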
Vasco Grilo · 3h
Thanks for following up, Matthew.

I agree, but I think very few people want to acquire e.g. 10 T$ of resources without broad consent of others. In addition, if a single AI system expressed such a desire, humans would not want to scale up its capabilities.

I agree biological humans will likely become an increasingly small fraction of the world, but it does not follow that AI carries a great risk to humans[1]. I would not say people born after 1960 carry a great risk to people born before 1960, even though the fraction of global resources controlled by the latter is becoming increasingly small. I would consider that AI poses a great risk to humans if they were expected to suffer, in the process of losing control over resources, significantly more than in their typical lives (which also involve suffering).

1. ^ You said "risk to humanity" instead of "risk to humans". I prefer "humans" because "humanity" is sometimes used to include other beings.

I agree, but I think very few people want to acquire e.g. 10 T$ of resources without broad consent of others.

I think I simply disagree with the claim here. I think it's not true. I think many people would want to acquire $10T without the broad consent of others, if they had the ability to obtain such wealth (and they could actually spend it; here I'm assuming they actually control this quantity of resources and don't get penalized because of the fact it was acquired without the broad consent of others, because that would change the scenario). It may be tha... (read more)

This announcement was written by Toby Tremlett, but don’t worry, I won’t answer the questions for Lewis.

Lewis Bollard, Program Director of Farm Animal Welfare at Open Philanthropy, will be holding an AMA on Wednesday 8th of May. Put all your questions for him on this thread...

Continue reading

In your recent 80k podcast almost all the work referenced seems to be targeted at the US and EU (except the Farm animal welfare in Asia section).

  • What is the actual geographic target of the work that’s being funded?
  • Is there work being done/planned to look at animal welfare funding opportunities more globally?
akleinman · 5h
What are your thoughts on replicating the success of prop 12/question 3 in new states as well as campaigning for new initiatives in Massachusetts and California (e.g. chick culling ban)? Is anyone working on this?
Vasco Grilo · 8h
Do you think Open Philanthropy's animal welfare grants should have write-ups whose main text is longer than 1 paragraph? I think it would be great if you shared the cost-effectiveness analyses you seem to be doing. In your recent appearance on The 80,000 Hours Podcast (which I liked!), you said (emphasis mine): To be clear, the main text of the write-ups of Open Philanthropy’s large grants is 1 paragraph across all areas, not just the ones related to animal welfare. However, I wonder whether there would be freedom for a given area to share more information (in grant write-ups or reports) if the people leading it thought that to be valuable.

It’s happening. Our second international protest. The goal: convince the few powerful individuals (ministers) who will be visiting the next AI Safety Summit (the 22nd of May) to be the adults in the room. It’s up to us to make them understand that this shit is real, that they are the only ones who have the power to fix the problem. Join our discord (https://discord.gg/EFDQt6RBR7) to coordinate about the protests.

Check out the international protesting listing on our website for information about other locations: https://pauseai.info/2024-may

Continue reading

We just published an interview: Dean Spears on why babies are born small in Uttar Pradesh, and how to save their lives. Listen on Spotify or click through for other audio options, the transcript, and related links. Below are the episode summary and some key excerpts.

Episode summary

I work in a place called Uttar Pradesh, which is a state in India with 240 million people. One in every 33 people in the whole world lives in Uttar Pradesh. It would be the fifth largest country if it were its own country. And if it were its own country, you’d probably know about its human development challenges, because it would have the highest neonatal mortality rate of any country except for South Sudan and Pakistan. Forty percent of children there are stunted. Only two-thirds of women are literate. So Uttar Pradesh is a place where there are lots of health challenges.

And then even within that, we’re working

...
Continue reading