
Sam Battis

168 karma · Joined Sep 2022 · proponentforsentience.wordpress.com/

Bio

I blog about political/economic theory and moral philosophy.

Comments (31)

I agree that activism in particular has a lot of idiosyncrasies, even within the broader field of systems change, that make it harder to model or understand but do not invalidate its worth. It is worthwhile to try to better understand activism and systems change in general, and to do so, EA methodology would need to be comfortable with much looser expected-value calculations than it normally uses. In particular, a separate framework from ITN may be preferable in this context: "scale, tractability, and neglectedness" may be less useful for deciding what kind of activism to do than concepts like momentum, potential scope, a movement's likely impact at maximum scope versus at minimum or median scope/success, personal skill/knowledge fit, and personal belief alignment.

I think it's worth attempting these sorts of napkin calculations and inventing frameworks for "things that don't usually meet the minimum quantifiability bar for EA," if only as a thought exercise to clarify one's beliefs. But beyond that, regardless of whether moderately rigorous investigation endorses the efficacy of various systems-change mechanisms, it seems straightforwardly good to develop tools that help those interested in systems change maximize their positive impact. Even if the EA movement itself remained less focused on systems change, people in EA are capable of producing accurate and insightful research on the huge and extremely important fields of public policy and social change, and those contributions could be taken up by other groups, hopefully raising the sanity waterline on the meta-decision of which movements to invest time and effort into. After all, there are literally millions of activist groups and systems-change-focused movements out there, and developing tools to make sense of that primordial muck could aid many people in their search for the most impactful and fulfilling movements to engage with.

We may never know whether highly quantifiable non-systems-change interventions or harder-to-quantify systems-change interventions are more effective, but developing an effectiveness methodology for both spheres seems likely to be better than restricting one's contributions to one. For example, spreading good ideas in the other sphere may boost the general influence of a group's ideals and methodologies, and advances there can cross-pollinate back. If EA optimizes for peak highly-quantifiable action, ought there to be a subgroup that optimizes for peak implementation of everything that doesn't meet EA's typical minimum quantifiability bar?

I agree that, to the extent EA engages in policy evaluation (or political/economic evaluation more generally), it should use a sentient-experience-maximization framework while discarding the labels of particular political theories, as you described. So far, every discussion of these matters I've seen in EA follows that framework, which is great.

With regard to specific arguments about post-politics:

I thought you made a strong case for post-politics in general, but arguing that one specific economic strategy is the best possible beyond all doubt is much harder to defend, and it does not seem very post-political anyway. In principle, a post-political person might argue for any economic strategy under the sun as optimal for sentient beings, though of course some arguments will be stronger than others.

Also, regardless of which systems they believe to be optimal, post-political people should entertain the possibility that they, others, or a given group or polity are actually too post-political: lacking the optimal amount or type of cultural orthodoxy/dogma and unwilling to address that.

This may come into play when an individual's or group's conviction in rights and other deontological tools becomes too weak, or when the deontological rules and norms they follow are worse than competing options.

After all, an orthodoxy or political culture beyond post-politics is necessary for "tie-breaking" or producing decisions in situations where calculation is inconclusive. Some political culture beyond post-politics will inevitably sway people in such situations, and it is worth making sure that that political culture is a good one.

An individual post-political thinker may therefore embrace advocacy of a certain political culture, if they think it is so valuable and under-utilized that advocating for it is a more efficient use of their resources than advocating for post-politics itself.

Generally I would say most people and institutions could stand to be more post-political, but I am not sure whether post-politics is currently a better advocacy target than other cultural/political movements.

If one were to advocate for such a movement, I'd guess the best way would be to create a political forum based on those principles and try to attract the people who enjoy participating in political forums. The goal would then be to keep the discourse focused on doing the most good for the most people, with rigorous evidence-based breakdowns of particular policies. This might be a decent use of time, given that such a post-political approach could improve thousands of people's decisions about systems change.

If something like this were created, I would recommend adding a system for users to catalogue, quantify, and compare their political positions, including their degree of confidence in each position. The ability to quickly and easily compare political positions between individuals seems like a very fast way to improve the accuracy of individuals' beliefs, especially in a community dedicated to strong beliefs lightly held and to finding the policies that do the most good.
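To make the idea concrete, here is a minimal sketch of what such a comparison system could store and compute. Everything in it is hypothetical: the `Position` fields and the confidence-weighted scoring rule are my own illustration of one possible design, not a reference to any existing forum feature.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Position:
    """One user's stance on a single policy question."""
    topic: str         # e.g. "land value tax" (hypothetical example topic)
    stance: float      # -1.0 (strongly oppose) to +1.0 (strongly support)
    confidence: float  # 0.0 (pure guess) to 1.0 (near-certain)

def agreement(a: list[Position], b: list[Position]) -> float:
    """Confidence-weighted agreement score in [-1.0, 1.0].

    Only topics both users have registered are compared. Each shared
    topic contributes the product of the two stances (matching signs
    count as agreement), weighted by the product of both users'
    confidence so tentative positions don't dominate the comparison.
    """
    a_by_topic = {p.topic: p for p in a}
    shared = [(a_by_topic[p.topic], p) for p in b if p.topic in a_by_topic]
    if not shared:
        return 0.0
    weights = [pa.confidence * pb.confidence for pa, pb in shared]
    total = sum(weights)
    if total == 0:
        return 0.0
    return sum(w * pa.stance * pb.stance
               for w, (pa, pb) in zip(weights, shared)) / total

# Example: two users who agree on one topic and mildly diverge on another.
alice = [Position("land value tax", 0.8, 0.9), Position("open borders", -0.2, 0.3)]
bob   = [Position("land value tax", 0.6, 0.7), Position("open borders", 0.4, 0.5)]
print(round(agreement(alice, bob), 3))  # positive: net agreement
```

Weighting by the product of confidences is just one possible choice; the main point is that once positions and confidence levels are stored in a structured way, pairwise comparison and tracking of belief changes over time become cheap to build.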

Yeah, honestly I don't think there is a single true deontologist on Earth. To say that anything, including deontology, is good or serves the good, one must first define the "good" being aimed at.

I think personal/direct situations entail a slew of complicating factors that a utilitarian should consider. Given that uncertainty, it is often rational to lean on intuition, and it is therefore bad to habitually undermine that intuition.

"Directness" inherently means higher level of physical/emotional involvement, different (likely closer to home) social landscape and stakes, etc. So constructing an "all else being equal" scenario is impossible.

Related to my initial point about deontologists: when the average person expresses a "directness matters" view, they are very likely expressing concern for these considerations rather than holding a diehard deontological view (even if their language suggests otherwise).

Two-minute coherent view there: the likely flow-through effects of not saving a child right in front of you, on your psychological wellbeing, community, and future social functioning, are drastically worse than those of not donating enough to save two children on average, especially compared to the counterfactual. And the powerful intuition one could expect to feel in such a situation, saying that you should save the child, is so strong that numbing or ignoring it is likely to damage that moral compass, which could be wildly imprudent. In essence:

-the psychological and flow-through effects of helping those in proximity to you are likely undervalued in extreme situations where you are the only one capable of mitigating the problem

-community flow-through effects of altruistic social acts in developed countries may be undervalued in general, especially if those acts uniquely foster one's own well-being or moral character through exercise of a "moral muscle"

-it is imprudent to ignore strong moral intuition, especially in emergency scenarios, and it is important to make a habit of not ignoring strong intuition (unless further reflection leads to the natural modification or dissipation of that intuition)

To me, naive application of utilitarianism often leads to underestimating these considerations.

Hi Brad,

The counterfactual is definitely something that I think I should examine in more detail.

Agreed that the marginal effect would be fairly logarithmic, and I probably should have considered that there is quite a lot of competition for employment at Earthjustice (i.e., one would need to be in the top 0.001% of lawyers to have a counterfactual impact).

I'm actually now fully convinced by the argument that seeking to work for Earthjustice is worse than ETG, so I might go and make some rather sweeping modifications to the post.

I think the exercise at least stands as a demonstration of the potential impact of systems-change nonprofits with a new or neglected focus, and of Earthjustice as a success story in this realm.

Do you have a high level of confidence that Earthjustice is too large and established for funding it to compete with funding new and/or neglected projects?

Hi Jason, thanks for the response.

Agreed that marginal increases have lower impact. I assume GiveWell-style research on the inner workings of the organization would be needed to see whether its funding efficacy is actually comparable to AMF's, and I don't presume to have that level of know-how; I'm just hoping to bring more attention to this area.

What tools are used to assess likely funging? Is a large deficit as a percentage of operating costs a sign that funging would be relatively low, or are most organizations without an explicit goal of continued scaling assumed to have very high funging costs, say 50% or more?

Other species are instrumentally very useful to humans, providing ecosystem functions, food, and sources of material (including genetic material). 

On the AI side, it seems possible that a powerful misaligned AGI would find ecosystems and/or biological materials valuable, or that it would be cheaper to use humans for some tasks than machines. I think these factors would raise the odds that some humans (or human-adjacent engineered beings) survive in worlds dominated by such an AGI.

It is potentially difficult to determine how good the average doctor is in a particular place and how much better than average one would be, but if one could be reasonably confident of making a large counterfactual difference to patient outcomes, the impact could be significant. The easiest way I can think of to be sure of these factors would be to go somewhere with a well-documented shortage of good doctors, while trying to learn about and emulate the attributes of good doctors.

Being a doctor may not be one of the highest-impact career paths on Earth, but it might be the highest-impact and/or most fulfilling one for a particular person. High impact and personal fit/fulfillment are fairly strongly correlated, I think, and it's worth exploring a variety of career options efficiently while making those decisions. In my experience it can be very difficult to know one's best path, but what has helped me most so far is getting a taste of the day-to-day in a role, along with talking to people already established in my prospective paths.

EA should add systems change as a cause area - MacAskill or Ord vs. [someone with a view of history that favors systems change more who's been on 80,000 Hours].

From hazy memory of their episodes, it seems like Ian Morris, Mushtaq Khan, Christopher Brown, or Bear Braumoeller might espouse this type of view.

True. I think they meant that it's plausible humans would convert the entire population of cows into spare parts, rather than just those that have reached a certain age or state, if it served human needs better for cows not to exist.
