

Quick takes

EAGxUtrecht (July 5-7) is now inviting applicants from the UK (alongside other Western European regions that don't currently have an upcoming EAGx).[1] Apply here! Ticket discounts are available and we have limited travel support. Utrecht is very easy to get to: you can fly or take the Eurostar to Amsterdam, and then every 15 minutes there's a direct train to Utrecht, which takes only 35 minutes (and costs €10.20).

[1] Applicants from elsewhere are encouraged to apply, but the bar for getting in is much higher.
In my latest post I talked about whether unaligned AIs would produce more or less utilitarian value than aligned AIs. To be honest, I'm still quite confused about why many people seem to disagree with the view I expressed, and I'm interested in engaging more to get a better understanding of their perspective. At the least, I thought I'd write a bit more about my thoughts here, and clarify my own views on the matter, in case anyone is interested in trying to understand my perspective.

The core thesis I was trying to defend is the following view:

My view: It is likely that by default, unaligned AIs—AIs that humans are likely to actually build if we do not completely solve key technical alignment problems—will produce comparable utilitarian value compared to humans, both directly (by being conscious themselves) and indirectly (via their impacts on the world). This is because unaligned AIs will likely both be conscious in a morally relevant sense and share human moral concepts, since they will be trained on human data.

Some people seem to merely disagree with my view that unaligned AIs are likely to be conscious in a morally relevant sense. And a few others have a semantic disagreement with me, in which they define AI alignment in moral terms rather than as the ability to make an AI share the preferences of the AI's operator.

But beyond these two objections, which I feel I understand fairly well, there's also significant disagreement about other questions. Based on my discussions, I've attempted to distill the following counterargument to my thesis, which I fully acknowledge does not capture everyone's views on this subject:

Perceived counterargument: The vast majority of utilitarian value in the future will come from agents with explicitly utilitarian preferences, rather than from those who incidentally achieve utilitarian objectives. At present, only a small proportion of humanity holds partly utilitarian views. However, since unaligned AIs will differ from humans across numerous dimensions, it is plausible that they will possess negligible utilitarian impulses, in stark contrast to humanity's modest (but non-negligible) utilitarian tendencies. As a result, it is plausible that almost all value would be lost, from a utilitarian perspective, if AIs were unaligned with human preferences.

Again, I'm not sure whether this summary accurately represents what people believe; however, it's what some seem to be saying. I personally think this argument is weak, but I feel I've had trouble making my views clear on this subject, so I thought I'd try one more time to explain where I'm coming from. Let me respond to the two main parts of the argument in some detail.

(i) "The vast majority of utilitarian value in the future will come from agents with explicitly utilitarian preferences, rather than those who incidentally achieve utilitarian objectives."

My response: I am skeptical of the notion that the bulk of future utilitarian value will originate from agents with explicitly utilitarian preferences. This clearly does not reflect our current world, where the primary sources of happiness and suffering are not the result of deliberate utilitarian planning. Moreover, I do not see compelling theoretical grounds to anticipate a major shift in this regard.

I think the intuition behind the argument is something like this: in the future, it will become possible to create "hedonium"—matter that is optimized to generate the maximum amount of utility or well-being.
If hedonium can be created, it would likely be vastly more important than anything else in the universe in terms of its capacity to generate positive utilitarian value. The key assumption is that hedonium would primarily be created by agents who have at least some explicit utilitarian goals, even if those goals are fairly weak. Given the astronomical value that hedonium could potentially generate, even a tiny fraction of the universe's resources being dedicated to hedonium production could outweigh all other sources of happiness and suffering. Therefore, if unaligned AIs would be less likely to produce hedonium than aligned AIs (due to not having explicitly utilitarian goals), this would be a major reason to prefer aligned AI, even if unaligned AIs would generate comparable levels of value in all other respects.

If this is indeed the intuition driving the argument, I think it falls short for a straightforward reason. The creation of matter-optimized-for-happiness is more likely to be driven by the far more common motives of self-interest and concern for one's inner circle (friends, family, tribe, etc.) than by explicit utilitarian goals. If unaligned AIs are conscious, they would presumably have ample motives to optimize for positive states of consciousness, even if not for explicitly utilitarian reasons.

In other words, agents optimizing for their own happiness, or the happiness of those they care about, seem likely to be the primary force behind the creation of hedonium-like structures. They may not frame it in utilitarian terms, but they will still be striving to maximize happiness and well-being for themselves and others they care about. And it seems natural to assume that, with advanced technology, they would optimize pretty hard for their own happiness and well-being, just as a utilitarian might optimize hard for happiness when creating hedonium.

In contrast to the number of agents optimizing for their own happiness, the number of agents explicitly motivated by utilitarian concerns is likely to be much smaller. Yet both forms of happiness will presumably be heavily optimized. So even if explicit utilitarians are more likely to pursue hedonium per se, their impact would likely be dwarfed by the efforts of the much larger group of agents driven by more personal motives for happiness-optimization. Since both groups would be optimizing for happiness, the fact that hedonium is similarly optimized for happiness doesn't seem to provide much reason to think that it would outweigh the utilitarian value of more mundane, and far more common, forms of utility-optimization.

To be clear, I think it's totally possible that there's something about this argument that I'm missing, and there are a lot of potential objections I'm skipping over. But on a basic level, I mostly just lack the intuition that the thing we should care about, from a utilitarian perspective, is the existence of explicit utilitarians in the future, for the reasons given above. The fact that our current world isn't well described by the idea that what matters most is the number of explicit utilitarians strengthens my point here.

(ii) "At present, only a small proportion of humanity holds partly utilitarian views. However, as unaligned AIs will differ from humans across numerous dimensions, it is plausible that they will possess negligible utilitarian impulses, in stark contrast to humanity's modest (but non-negligible) utilitarian tendencies."
My response: Since only a small portion of humanity is explicitly utilitarian, the argument's own logic suggests that there is significant potential for AIs to be even more utilitarian than humans, given the relatively low bar set by humanity's limited utilitarian impulses. While I agree we shouldn't assume AIs will be more utilitarian than humans without specific reasons to believe so, it seems entirely plausible that factors like selection pressures for altruism could lead to this outcome. Indeed, commercial AIs seem to be selected to be nice and helpful to users, which (at least superficially) seems "more utilitarian" than the default, primarily selfish, impulses of most humans. The fact that humans are only slightly utilitarian should mean that even small forces could cause AIs to exceed human levels of utilitarianism.

Moreover, as I've said previously, it's probable that unaligned AIs will possess morally relevant consciousness, at least in part due to the sophistication of their cognitive processes. They are also likely to absorb and reflect human moral concepts as a result of being trained on human-generated data. Crucially, I expect these traits to emerge even if the AIs do not share human preferences.

To see where I'm coming from, consider how humans are routinely "misaligned" with each other, in the sense of not sharing each other's preferences, and yet still share moral concepts and a common culture. For example, an employee can share moral concepts with their employer while having very different consumption preferences. This picture is pretty much how I think we should primarily think about unaligned AIs that are trained on human data and shaped heavily by techniques like RLHF or DPO.

Given these considerations, I find it unlikely that unaligned AIs would completely lack any utilitarian impulses whatsoever. However, I do agree that even a small risk of this outcome is worth taking seriously. I'm simply skeptical that such low-probability scenarios should be the primary factor in assessing the value of AI alignment research.

Intuitively, I would expect the arguments for prioritizing alignment to be more clear-cut and compelling than "if we fail to align AIs, there's a small chance that these unaligned AIs might have zero utilitarian value, so we should make sure AIs are aligned instead". If low-probability scenarios are the strongest considerations in favor of alignment, that seems to undermine the robustness of the case for prioritizing this work. While it's appropriate to consider even low-probability risks when the stakes are high, I'm doubtful that small probabilities should be the dominant consideration in this context. I think the core reasons for focusing on alignment should probably be more straightforward and less reliant on complicated chains of logic than this type of argument suggests.

In particular, as I've said before, I think it's quite reasonable to think that we should align AIs to humans for the sake of humans. In other words, I think it's perfectly reasonable to admit that solving AI alignment might be a great thing to ensure human flourishing in particular. But if you're a utilitarian, and not particularly attached to human preferences per se (i.e., you're non-speciesist), I don't think you should be highly confident that an unaligned AI-driven future would be much worse than an aligned one, from that perspective.
I've recently made an update to our Announcement on the future of Wytham Abbey, noting that, since that announcement, we have decided to use some of the proceeds for Effective Ventures' general costs.

Popular comments

Recent discussion

This announcement was written by Toby Tremlett, but don’t worry, I won’t answer the questions for Lewis.

Lewis Bollard, Program Director of Farm Animal Welfare at Open Philanthropy, will be holding an AMA on Wednesday 8th of May. Put all your questions for him on this thread...


I think there's a lot of potential in regulatory reform, though I'm probably more optimistic about its prospects outside the US. E.g. I think DEFRA in the UK or the European Commission are more likely to make meaningful regulatory changes than the USDA.

My top priority US regulatory reform would be to get the USDA to interpret the Humane Methods of Slaughter Act to apply to birds too. Courts have held that it's within the USDA's discretion to decide this, but decades of on-and-off advocacy by HSUS and AWI have failed to get them to do so. I do think it's wor...

Vasco Grilo
Interesting question, Aidan! Relatedly, I liked the 80,000 Hours podcast episode "Cass Sunstein on how social change happens, and why it's so often abrupt & unpredictable". One of the topics they discuss is whether the way the world treats farmed animals could abruptly change.
justsaying
Thanks for doing this AMA, Lewis! To steal from the suggested questions: what do you think is behind the decline in plant-based meat sales? And what do you think are some good strategies to build career capital in the animal welfare space? Relatedly, which areas of animal welfare are more skill/talent constrained?

I am seeking suggestions for a path to pursue my driving mission: reducing as much suffering as possible for all sentient beings.

At 18, I had to drop out of college before completing one year due to the onset of severe depression, anxiety, and ME/CFS. This decade-long ...


I'm sorry to hear treatments generally haven't helped in the past. 

I sometimes find it useful to think about these things in the following way. It feels like a lot to sacrifice energy to do therapy when you're already limited in terms of energy. But if it works particularly well, maybe you'll have something like an extra day of energy a week for... well for your whole life. It might be worth doing even if it takes a lot now, and even if the odds of success are low. (Of course, in some cases the odds are so low that it isn't worth it).

I don't know much about the specifics here; my own experience has been with anxiety, depression, and ADHD.

titotal posted a Quick Take

What is the best practice for dealing with biased sources? For example, if I'm writing an article critical of EA and cite a claim made by Émile Torres, would it be misleading not to mention that they have an axe to grind?


Anders Sandberg has written a “final report” released simultaneously with the announcement of FHI’s closure. The abstract and an excerpt follow.


Normally manifestos are written first, and then hopefully stimulate actors to implement their vision. This document is the reverse

...
Habryka
Yeah, I've considered this a bunch (especially after my upvote strength on LW went up to 10, which really limits the number of people in my reference class). I think a full multi-selection UI would be hard, but a profile setting that lets you set your upvote strength to any number between 1 and your current maximum would be less convenient for users while being much easier UI-wise. It would still require fairly involved changes to the way votes are stored: we currently have an invariant that guarantees you can recalculate any user's karma from nothing but the vote table, and this would introduce a new dependency into that calculation, with some reasonably big performance implications.
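To illustrate the invariant being described, here is a minimal TypeScript sketch; the schema and names are hypothetical stand-ins, not the Forum's actual code.

```typescript
// Hypothetical vote-table shape, not the Forum's actual schema.
interface Vote {
  documentId: string; // post or comment that was voted on
  userId: string;     // the voter
  authorId: string;   // the author who receives the karma
  power: number;      // e.g. +2 for a normal upvote, +9 for a strong upvote
}

// The invariant: a user's karma is recomputable from the vote table alone.
function recalculateKarma(votes: Vote[], authorId: string): number {
  return votes
    .filter(v => v.authorId === authorId)
    .reduce((sum, v) => sum + v.power, 0);
}

// A configurable upvote strength would mean `power` could no longer be derived
// purely from the voter's karma tier at vote time: either the chosen strength
// is stored on every vote row, or recalculation has to join against user
// settings, which is the new dependency (and performance cost) mentioned above.
```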
Pablo
Alternatively, you could make the downvote button reduce votes by one if the vote count is positive, and vice versa. For example, after casting a +9 on a comment by strongly upvoting it, the user can reduce the vote strength to +7 by pressing the downvote button twice.
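A rough sketch of the adjustment rule being suggested (a hypothetical function, not the Forum's actual voting logic):

```typescript
// Pressing the button opposite to an existing vote shrinks its magnitude by
// one instead of flipping it, so +9 becomes +8, then +7, after two downvote presses.
function adjustVoteStrength(current: number, pressedDownvote: boolean): number {
  if (current > 0 && pressedDownvote) return current - 1;
  if (current < 0 && !pressedDownvote) return current + 1;
  return current; // same-direction presses keep today's behaviour
}
```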

That's an interesting idea, I hadn't considered that!

tobytrem posted a Quick Take

FAQ: “Ways the world is getting better” banner

The banner will only be visible on desktop. If you can't see it, try expanding your window. 

How do I use the banner?

  1. Click on an empty space to add an emoji, 
  2. Choose your emoji, 
  3. Write a one-sentence description of the good news you want to share, 
  4. Link an article or forum post that gives more information. 

If you’d like to delete your entry, click the cross that appears when you hover over it. It will be deleted for everyone.

What kind of stuff should I write?

Anything that qualifies as good news relevant to the world's most important problems. 

For example, Ben West’s recent quick takes (1, 2, 3).

Avoid posting partisan political news, but the passage of relevant bills and policies is on topic. 

Will my entry be anonymous?

All submissions are anonymous, but usual moderation norms still apply (additionally, we may remove duplicates or borderline trollish submissions. This is an experiment, so we reserve the right to moderate heavily if necessary).

Ask any other questions you have in the comments below. 


 


I once read a similar point in one of Bryan Caplan's articles: if you are a utilitarian, you will create a society that favors neurotic people. If this problem doesn't need to be solved, why not? And if it does, how would one solve it?


I assume the argument is that neurotic people suffer more when they don't get resources, so resources should go to more neurotic people first?

I think that's correct in an abstract sense but wrong in practice for at least two reasons:

  1. Utilitarianism says you should work on the biggest problems first. Right now the biggest problems are (roughly) global poverty, farm animal welfare, and x-risk.
  2. A policy of helping neurotic people encourages people to act more neurotic and even to make themselves more neurotic, which is net negative, and therefore bad according to utilitarianism. Properly-implemented utilitarianism needs to consider incentives.

There are two main areas of catastrophic or existential risk which have recently received significant attention: biorisk, from natural sources, biological accidents, and biological weapons; and artificial intelligence, from detrimental societal impacts of systems, incautious...

Sebastian Kreisel
Thanks for drawing this line between biorisk and AI risk. Somewhat related: I often draw parallels between threat models in cyber security and certain biosecurity questions, such as DNA synthesis screening. After reading your write-up, those two seem much more closely related than biorisk and AI risk, and I'd say cyber security is often a helpful analogy for biosecurity in certain contexts. Sometimes biosecurity intersects directly with cyber security, namely when critical information (like DNA sequences of concern) is stored digitally. I'd be interested in your opinion.

I think there are useful analogies between specific aspects of bio, cyber, and AI risks, and it's certainly the case that when the biorisk is based on information security, it's very similar to cybersecurity, not least in that it requires cybersecurity! And the same is true for AI risk; to the extent that there is a risk of model weights leaking, this is in part a cybersecurity issue.

So yes, I certainly agree that many of the dissimilarities with AI are not present if analogizing to cyber. However, more generally, I'm not sure cybersecurity is a good a...


This Rethink Priorities report provides a shallow overview of the potential for impactful opportunities from institutional plant-based meal campaigns in the US, France, Germany, UK, Spain, and Italy, based on reviewing existing research and speaking with organizations conducting such campaigns. Shallow overviews are mainly intended as a quick low-confidence writeup for internal audiences and are not optimized for public consumption. The views expressed herein are not necessarily endorsed by the organizations who were interviewed.

Main takeaways from the report include:

Emphasize reducing all animal products in order to avoid substitution from beef & lamb to chicken, seafood, & eggs, which require more animals to be harmed.

[Confidence: Medium-High. There are many examples of programs that have had this problem (Hughes, 2020, 2:12:50; Gravert & Kurz, 2021; Lagasse &...


Shrimp Welfare Project (SWP) produced this report because we believe it could have significant informational value to the movement, rather than because we anticipate SWP directly working on a shrimp paste intervention in the future. We think a new project focused on shrimp...


FWIW, shrimp paste alternatives seem morally ambiguous and have a significant risk of backfiring.

  1. Shrimp paste alternatives would probably increase paste shrimp populations. If you think paste shrimp have overall bad lives naturally, then increasing their populations this way would be bad. If you're highly uncertain about this, then the effects on their population would be highly morally ambiguous.
  2. Shrimp paste alternatives could increase paste shrimp catch (if they're overfished; see my recent post).
  3. I don't know how common this is or will be, but I've also
...