
Most effective altruists, especially those who work in AI safety, majored in computer science or math. But should effective altruists working in AI safety spend time learning the basics (at the level of an introductory university course) of physics, chemistry, biology, the social sciences, and so on? I think mastering computer science doesn't require learning natural science. On the other hand, some argue that regardless of your major, you should study the basics of every subject, because they will be helpful someday (Brian Tomasik's article "Education Matters for Effective Altruism" takes this position). And we are different from people who only aim to earn money: we want to do altruistic things, which are often quite unusual, so our knowledge needs may differ from others'. Do you think EA people should be generalists and spend time learning subjects such as general physics, general chemistry, and general biology? Or should we not spend any time on subjects that are irrelevant to the issues we work on?

Answers

(I hope you'll forgive me if this is a bit meandering.)

I've not yet read the book Range: Why Generalists Triumph in a Specialized World, but my vague understanding is that the general argument is about how exploring a wide range of fields is beneficial. I'm certainly biased, because I'm a person who is interested in a variety of different topics, so of course I'll love any argument saying that the way I naturally tend to do things is good/right/beneficial. Whether wide-ranging learning tends to have direct benefit is going to depend on the specific topics learned, but I do find that there are unexpected connections that are revealed only once you do some kind of cross-disciplinary study.

I strongly suspect that certain areas/subjects are more "transferable benefit rich" than others. As a silly example: I enjoy learning about history, but I've been able to use my elementary knowledge of social psychology and statistics in a much wider range of contexts than the various books I've read about the Opium Wars or the Aztec perspective on the conquest of Mexico.

I also suspect that we can't make a confident claim about how much a particular field will or won't contribute to another field if we haven't studied both. I assume that learning biology wouldn't contribute much to AI safety, but that amounts to "I don't see anything there, therefore I claim that nothing is there." So it is hard to say which fields are 'worth' exploring if you haven't explored them yet. I vaguely remember reading something about a collaboration between professors of music and... something.[1]

So I guess my non-expert answer to your question would be something like "some Effective Altruists should learn a wide range of subjects, but not all of them. Some subjects should be encouraged for cross-disciplinary study more than others. There is benefit to specialization, just like there is benefit to being a jack of all trades, but not everyone should specialize."

  1. ^

    Out of curiosity, I asked ChatGPT about the most successful cross-disciplinary collaborations, and was told about:

    • the human genome project (genetics, biology, computer science)

    • climate research (atmospheric science, ecology, economics, sociology, and policy-making)

    • translational medicine (basic science, clinical research, and healthcare delivery)

    • smart cities (urban planners, architects, engineers, computer scientists, economists, and policymakers)

I guess that depends on what their end goal is, but tbh learning new things outside your specialism is usually very useful because it expands your knowledge of how things work. If the end goal is to make an impact with technical AI research, I'd say yes, particularly the social sciences (which you mention in your post). A basic understanding of social sciences such as sociology, law, and economics is very useful for technical AI research, because that research inevitably interacts with and relies upon human systems as well as technical systems. Additionally, technical AI work in industry requires interaction with a lot of those departments, so it helps to deepen your ability to bridge and pitch ideas or concepts.

Some may well disagree, but that's the nature of opinions.


Comments

A few questions that you might find helpful for thinking this through:

• What are your AI timelines?
• Even if you think AI will arrive by X, perhaps you'll target a timeline of Y-Z years because you think you're unlikely to be able to make a contribution by X
• What agendas are you most optimistic about? Do you think none of these are promising and what we need are outside ideas? What skills would you require to work on these agendas?
• Are you likely to be the kind of person who creates their own agenda or contributes to someone else's?
• How enthusiastic are you about these subjects? Are you likely to be any good at them? Many people make a contribution without using anything outside of computer science, but sometimes it takes a person with outside knowledge to really push things forward to the next level.
