I definitely think it’s a good idea for EA to expand the variety of academic disciplines among its members. The social sciences in particular would benefit EA; for example, sociology could give us a framework for the social, cultural, and institutional relationships that underlie the problems found within developing countries, which could inform how we direct our resources. I also think EAs may be blind to the idea that diversity increases a group’s collective intelligence, because we assume that we already recruit the most talented people (e.g., highly educated people studying the most relevant subjects), and that if we recruit the most talented people, then our epistemics must surely be top-notch. That assumption excludes a lot of people, especially those in poorer countries where education isn’t as easily accessible, along with their ways of thinking and knowing.

I found this article very interesting for several reasons: just how many proxies can be used to gauge animal suffering (and that it’s possible to gauge it accurately at all); the fact that research into wild animal suffering is now being taken quite seriously; how many forward-thinking policies have already been put in place to promote animal welfare (in particular, that cultured meat is already starting to be approved as safe, as well as parliament’s serious consideration of the welfare of smaller, supposedly less sentient animals like prawns and shrimp); and, lastly, that it’s the smallest animals that experience the greatest amount of suffering relative to larger (supposedly more sentient) animals.

I found nearly all of this very surprising, as it violates a lot of my intuitive assumptions, and the animal welfare movement is much more robust than I initially thought. Now I definitely need to rethink my eating habits, as I am a pescetarian who only eats prawns (as opposed to other fish, beef, chicken, etc.) because of their seemingly negligible sentience. Finally, I hadn’t thought it possible for more moderate demands and more radical demands to coexist without conflicting with one another and ultimately hampering the animal welfare movement. Maybe the opposite of what I intuitively thought is true: letting these ideas coexist means the animal welfare movement doesn’t have to be as cautious, and the more ideas that are out there, the more attention is drawn to the movement as a whole and the more people will get involved. Maybe there’s also the added effect that seeing a robust debate occur within the movement makes it, in a sense, more flexible and less morally rigid: only some people prescribe the more radical approaches and choices, so it isn’t as though the entire animal welfare movement is looking down on and judging the average person.


 

While the capability approach definitely has some upsides, such as measuring wellbeing in terms of people’s positive freedom (rather than merely not being infringed upon by others, people are only “free” if they have meaningful opportunities available to them), one downside is that it runs into problems similar to those of other utilitarian metrics if the goal is to maximise wellbeing. For example, even in the case of discrimination, if the people doing the discriminating gained more capabilities than were lost by those being discriminated against, then the discrimination would be justified. One would still need a harm cap stating that when any one person or group loses enough capabilities, no such action is justified, even if there is a net increase in capabilities.
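To make the harm-cap idea a bit more concrete, here is a minimal sketch of how such a rule might be expressed. The function name, threshold value, and capability numbers are all hypothetical illustrations of my own, not anything from the article:

```python
# A rough sketch of the "harm cap" rule described above. The threshold,
# function name, and capability numbers are hypothetical illustrations.

HARM_CAP = 10  # assumed maximum capability loss any one group may bear

def permissible(capability_changes: dict) -> bool:
    """An action passes only if (1) it yields a net gain in capabilities
    and (2) no single person or group loses more than HARM_CAP."""
    net_gain = sum(capability_changes.values()) > 0
    worst_loss = -min(capability_changes.values())
    return net_gain and worst_loss <= HARM_CAP

# A pure sum-maximiser would approve the first case (+25 vs -15),
# but the harm cap rejects it because one group loses too much.
print(permissible({"discriminators": 25, "discriminated_group": -15}))  # False
print(permissible({"group_a": 5, "group_b": -3}))                       # True
```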

Also, I think the problem with traditional methods of measuring wellbeing (e.g., happiness, SWB), namely that they don’t align with people’s priorities, could be solved if the metric being measured were personal meaning: even if having children, believing in a religion, or viewing pieces of art that evoke sadness don’t necessarily maximise happiness, they can all facilitate feelings of subjective meaning in individuals. That being said, this still isn’t perfect, as the AI example could just be about using a different drug that boosts other neurotransmitters like serotonin or oxytocin rather than simply dopamine.


 

While I definitely think it’s correct that EA should avoid adopting any one moral philosophy and instead take a more pluralistic approach, it might still be useful to have a wing of the movement dedicated to moral philosophy. I don’t see why EA can’t be a haven for moral and political philosophers collaborating with other EA members to do the most good possible, as it might be worthwhile to focus on wide-scale systemic change and more abstract, fundamental questions such as what value is in the first place. In fact, one weakness of EA is precisely that it isn’t pluralistic in terms of the demographics of its members and how they view systemic change; for example, consider Tyler Cowen’s quote about EA’s demographics in the United States:


“But I think the demographics of the EA movement are essentially the US Democratic Party. And that's what the EA movement over time will evolve into. If you think the existential risk is this kind of funny, weird thing, it doesn't quite fit. Well, it will be kind of a branch of Democratic Party thinking that makes philanthropy a bit more global, a bit more effective. I wouldn't say it's a stupider version, but it's a less philosophical version that's a lot easier to sell to non-philosophers.” 

 

If wide-scale philosophical collaboration were incorporated into EA, it might be a rare opportunity for political philosophers of all stripes (e.g., libertarians, socialists, anarchists, neoliberals, etc.) to collaborate on systemic questions relating to how to do the most good. I think this is especially needed considering how polarised politics has become. Additionally, considering abstract questions about the fundamental nature of value would particularly help with the vaguer expected value calculations, those that try to compare the value of qualitatively distinct experiences.

This is something I actually agree with, not just in terms of movement-building but as a wider moral philosophy. There is reason to think that utilitarianism is too demanding: for example, it requires that everyone make every decision impartially (e.g., giving benefits and harms to family and friends the same priority as benefits and harms to strangers), or, at the extreme, that people calculate every action in terms of how much good or harm it does to others. Both of these demands are impractical and ultimately lead to misery by not taking into account what makes human lives worth living (e.g., having committed relationships with a select number of people whom one considers more valuable than strangers, or sometimes indulging in frivolities that may prevent one from being maximally altruistic). I think people often equate utilitarianism with consequentialism as a whole, which may be counterproductive. Sprinkling in some egoist practices here and there may be what ultimately leads to the most happiness and the least harm in the long run, since diminishing the quality of one’s own life in the name of helping others, if universalised, would lead to an unhappy world (in this sense, Kant’s Categorical Imperative may be useful here).

Cowen thinks there are limits to EA’s idea that we should be completely impartial in our decisions (that we should weigh all human lives as equal in value, to the point where we only care about how many lives we can impact and not where in the world those lives are). He cites a thought experiment in which aliens come to Earth and want to enslave humankind for their benefit. We don’t calculate whether more net happiness would be generated if the aliens got what they wanted: the vast majority of people would always choose to fight alongside their fellow humans (thus being partial).

Cowen then claims that some degree of partiality is an inescapable part of human psychology, so we ought not to strive to be completely impartial. Not only does this run into Hume’s is-ought problem, since he’s using (what he believes to be) an empirical fact to derive an ought, but it also doesn’t get to the core reason why we ought to be partial in some situations. This matters because having a core principle would more clearly define what the limits of our impartiality should be.

For example, I think the notion of personal and collective responsibility is extremely important here for setting clear limits: I am partial to, say, my family over strangers because I have relationships with them that make me accountable to them in a way I am not accountable to strangers. Governments need to be partial to the citizens of their own country over the citizens of other countries because they are funded through those citizens’ taxes and voted in by them.

Humans should fight on the side of humans in the war against the aliens for two reasons. First, every human being is in a relationship with herself, making her responsible for not letting herself be enslaved. Second, one can include moral responsibility under the umbrella of personal and collective responsibility: even if only some humans are enslaved and there is no personal benefit for most people in fighting on their side, slavery is immoral, so we ought to fight for the rights and dignity of those people if there is something we can do about it. By contrast, if a specific subset of humans engaged a whole race of aliens in battle (with both sides voluntarily engaged) and the winner wouldn’t enslave the loser, it would actually be wise to pick the side whose victory would lead to the most net happiness, as mere tribalism is not a valid reason to be partial.

If today's AI research is predominantly led by people with a tinkering and engineering background, does that mean that disciplines like theoretical neuroscience have less to say about AI than we currently think, or can more theoretical fields still inform the development of AI? For example, I know that neural networks are only loosely based on the brain and the idea of neural plasticity, but there may be reason to think that making AI even more similar to the brain can bring it closer to human-like intelligence (https://www.nature.com/articles/d41586-019-02212-4). If mathematical theory about the brain can inform the development of more cutting-edge AI algorithms, particularly unsupervised learning algorithms, wouldn't that contradict the notion that modern AI is the purview of engineering? As the article stated, a consequence of the guesswork that we do when choosing AI techniques and their underlying methods is that the inner workings of deep neural networks are often not transparent. Wouldn't it be up to more theoretical disciplines to decipher what is really going on under the hood?  
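To illustrate the kind of brain-inspired, unsupervised method I have in mind, here is a tiny sketch of a Hebbian-style weight update, one of the classic learning rules drawn from theoretical neuroscience rather than from backpropagation-based engineering practice. It is purely illustrative (the array sizes and learning rate are arbitrary choices of mine, not from the article):

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.01):
    """One Hebbian step: strengthen connections between co-active units
    ("cells that fire together wire together"). Unlike backpropagation,
    this is unsupervised and uses only locally available activity."""
    return weights + lr * np.outer(post, pre)

# Tiny illustrative example: 4 input units feeding 3 output units.
rng = np.random.default_rng(0)
w = rng.normal(size=(3, 4))             # connection weights
pre = np.array([1.0, 0.0, 1.0, 0.0])    # presynaptic activity pattern
post = w @ pre                          # postsynaptic response
w = hebbian_update(w, pre, post)        # weights move toward the co-activity
```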

I am wondering whether the fact that theories of consciousness relate more to the overall architecture of a system than to its low-level workings is a limitation that should be strongly considered, particularly if there are other theories that are more focused on low-level cellular functioning. For example, I've seen a theory from renowned theoretical physicist Roger Penrose (video below) stating that consciousness is a quantum process. If this is the case, then current computers wouldn't be conscious because they aren't quantum devices at the level of individual transistors or circuits. Therefore, no matter what the overall neural architecture is, even in a complicated AI, the system wouldn't be conscious.

Another interesting point is that the way we incorporate AI into society may be affected by whether the AIs we build are sentient. If they are, and we have to respect their rights when developing, implementing, or shutting down such systems, that may create an incentive to do these things more slowly. I think slowing down may be good for the world at large: plunging into the AI revolution while we still only have this black-box method of designing and training AIs (that is, while digital neuroscience hasn't progressed enough for us to decode the neural circuits of AIs and know what they're actually "thinking" when they make decisions: https://forum.effectivealtruism.org/posts/rJRw78oihoT5paFGd/high-level-hopes-for-ai-alignment) seems very dangerous.

I think the distinction between everyday people and EAs may be a harmful one in the realm of politics. We're assuming that EAs automatically have the authority to decide which information is "honest", "unbiased", or "high-quality". Couldn't someone in EA not only be biased in a specific direction, but also better than non-EAs at rationalising their biases? It may be best to have a think tank within EA to ensure that there is a specific subset of people actually willing to comb through political science research, find truly objective information, and distill it into something most people are willing to engage with.