
Summary 

Software engineering could be a great option for having a direct impact on the world’s most pressing problems, particularly in AI safety, but also in biosecurity and across other cause areas. This will probably be more impactful than earning to give.

As with operations staff, organisations need exceptional and mission-aligned software engineers. But many still find them difficult to hire.

Some myths:

  • You need an ML background to work as an engineer on AI safety.
  • Outside AI safety, the only useful software skill is front-end web development.
  • Effective organisations will pay far less than top companies.

None of these things are true.

In fact, many organisations have budgets in the tens of millions of dollars, and think that software engineers can substantially increase their cost-effectiveness (e.g. in his 80,000 Hours podcast episode, Chris Olah argues that a systems engineer at Anthropic could easily increase the team's efficiency by at least 10%). So even if you're earning 7 figures, you could be more effective doing direct impact work.

The rest of this post contains an excerpt from my new career review of software engineering for 80,000 Hours, focusing on the parts most relevant to already-engaged EAs.

This review owes a lot to helpful discussions with (and comments from) Andy Jones, Ozzie Gooen, Jeff Kaufman, Sasha Cooper, Ben Kuhn, Nova DasSarma, Kamal Ndousse, Ethan Alley, Ben West, Ben Mann, Tom Conerly, Zac Hatfield-Dodds, and George McGowan. Special thanks go to Roman Duda for our previous review of software engineering, on which this was based.

Why might software engineering be high impact?

Software engineers are in a position to meaningfully contribute directly to solving a wide variety of the world’s most pressing problems.

In particular, there is a shortage of software engineers at the cutting edge of research into AI safety.

We’ve also found that software engineers can contribute greatly to work aiming at preventing pandemics and other global catastrophic biological risks.

Aside from direct work on these crucial problems, while working for startups or larger tech companies you can gain excellent career capital (especially technical skills), and, if you choose, earn and donate substantial amounts to the world’s best charities.

How to do good as a software engineer

Even for skilled engineers who could command high salaries, we think that working directly on a problem will probably be more impactful than earning to give.

Some examples of projects where software engineering is central to their impactful work:

Most organisations, even ones that don’t focus on developing large software products, need software engineers to manage computer systems, apps, and websites. For example:

Many people we’ve spoken to at these and other organisations have said that they have real difficulty hiring extremely talented software engineers. Many nonprofits want to hire people who believe in their missions (just as they do with operations staff), which indicates that talented, altruistic-minded software engineers are sorely needed and could do huge amounts of good.

Smaller organisations that don’t focus on engineering often only have one or two software engineers. And because things at small organisations can change rapidly, they need unusually adaptable and flexible people who are able to maintain software with very little help from the wider team.1

It seems likely that, as the community of people working on helping future generations grows, there will be more opportunities for practical software development efforts to help. This means that even if you don’t currently have any experience with programming, it could be valuable to begin developing expertise in software engineering now.

Software engineers can help with AI safety

We’ve argued before that artificial intelligence could have a deeply transformative impact on our society. There are huge opportunities associated with this ongoing transformation, but also extreme risks — potentially even threatening humanity’s survival.

With the rise of machine learning, and the huge success of deep learning models like GPT-3, many experts now think it’s reasonably likely that our current machine learning methods could be used to create transformative artificial intelligence.

This has led to an explosion in empirical AI safety research, where teams work directly with deep neural networks to identify risks and develop frameworks for mitigating them. Examples of organisations working in empirical AI safety research include Redwood Research, DeepMind, OpenAI, and Anthropic.

These organisations are doing research directly with extremely large neural networks, which means each experiment can cost millions of dollars to run. This means that even small improvements to the efficiency of each experiment can be hugely beneficial.

There’s also often overlap between experimental results that will help further AI safety and results that could accelerate the development of unsafe AI, so it’s also important that the results of these experiments are kept secure.

As a result, it’s likely to remain incredibly valuable to have talented engineers working on ensuring that these experiments are as efficient and safe as possible. Experts we spoke to expect this to remain a key bottleneck in AI safety research for many years.

However, there is a serious risk associated with this route: it seems possible for engineers to accidentally increase risks from AI by generally accelerating the technical development of the field. We’re not sure of the more precise contours of this risk (e.g. exactly what kinds of projects you should avoid), but think it’s important to watch out for. That said, there are many more junior non-safety roles out there than roles focused specifically on safety, and experts we’ve spoken to expect that most non-safety projects aren’t likely to be causing harm. If you’re uncertain about taking a job for this reason, our team may be able to help you decide.

Software engineer salaries mean you can earn to give

In general, if you can find a job you can do well, you’ll have a bigger impact working on a problem directly than you would by earning money and donating. However, earning to give can still be a high-impact option, especially if you focus on donating to the most effective projects that could use the extra funds.

If you’re skilled enough to work at top companies, software engineering is a well-paid career. In the US, entry-level software engineer salaries start at around $110,000. Engineers at Microsoft start at $150,000, and engineers at Google start at around $180,000 (including stock and bonuses). If you’re successful, after a few years on the job you could be earning over $500,000 a year.

Pay is generally much lower in other countries. Median salaries in Australia are around 20% lower than salaries in the US (approximately US$80,000), and around 40% lower in the UK, Germany, Canada, and Japan (approximately US$60,000). While much of your earnings as a software engineer come from bonuses and equity, rather than just your salary, these are also lower outside the US.

If you do want to make a positive difference through donating part of your income as a software engineer, you may be able to increase your impact by using donation-matching programmes, which are common at large tech companies (although these are often capped at around US$10,000 per year).

You can read more about salaries at large tech companies below.

It’s important to note that many nonprofit organisations, including those focusing on AI safety, will offer salaries and benefits that compete with those at for-profit firms.

If you work at or found a startup, your earnings will be highly variable. However, the expected value of your earnings — especially as a cofounder — could be extremely high. For this reason, if you’re a particularly good fit, founding a tech startup and donating your earnings could be hugely impactful, as you could earn and donate extraordinary amounts.
 

Moving to a direct impact software engineering role

Working in AI safety

If you are looking to work in an engineering role in an AI safety or other research organisation, you will probably want to focus on back-end software development (although there are also front-end roles, particularly those focusing on gathering data from humans on which models can be trained and tested). There are recurring opportunities for software engineers with a range of technical skills (to see examples, take a look at our job board).

If you have the opportunity to choose areas in which you could gain expertise, the experienced engineers we spoke to suggested focusing on:

  • Distributed systems
  • Numerical systems
  • Security

In general, it helps to have expertise in any specific, hard-to-find skillsets.

This work uses a range of programming languages, including Python, Rust, C++ and JavaScript. Functional languages such as Haskell are also common.

We’ve previously written about how to move into a machine learning career for AI safety. We now think it is easier than we previously thought to move into an AI-safety-related software engineering role without explicit machine learning experience.

The Effective Altruism Long-Term Future Fund and the Survival and Flourishing Fund may provide funding for promising individuals to learn skills relevant to helping future generations, including new technologies such as machine learning. If you already have software engineering experience, but would benefit from explicit machine learning or AI safety experience, this could be a good option for you.

If you think you could, with a few weeks’ work, write a new feature or fix a bug in a major machine learning library, then you could probably apply directly for engineering roles at top AI safety labs (such as Redwood Research, DeepMind, OpenAI, and Anthropic), without needing to spend more time building experience in software engineering. These top labs offer pay that is comparable to pay at large tech firms.

If you are considering joining an AI safety lab in the near future, our team may be able to help.

Working on reducing global catastrophic biological risks

Reducing global catastrophic biological risks — for example, research into screening for novel pathogens to prevent future pandemics — is likely to be one of the most important ways to help solve the world’s most pressing problems.

Through organisations like Telis Bioscience and SecureDNA (and other projects that might be founded in the future), there are significant opportunities for software engineers to contribute to reducing these risks.

Anyone with a good understanding of how to build software can be useful in these small organisations, even if they don’t have much experience. However, if you want to work in this space, you’ll need to be comfortable getting your hands dirty and doing whatever needs to be done, even when the work isn’t the most intellectually challenging. For this reason, it could be particularly useful to have experience working in a software-based startup.

Much of the work in biosecurity is related to handling and processing large amounts of data, so knowledge of how to work with distributed systems is in demand. Expertise in adjacent fields such as data science could also be helpful.

There is also a big focus on security, particularly at organisations like SecureDNA.

Most code in biosecurity is written in Python.

If you’re interested in working on biosecurity and pandemic preparedness as a software engineer, you can find open positions on our job board.

Other important direct work

Nonprofit organisations and altruistic-minded startups often have very few team members. And no matter what an organisation does, they almost always have some need for engineers (for example, 80,000 Hours is not a software organisation, but we employ two developers). So if you find an organisation you think is doing something really useful, working as a software engineer for them might be an excellent way to support that work.

Engineering for a small organisation likely means doing work across the development process, since there are few other engineers.

Often these organisations are focused on front-end development, with jobs ranging from application development and web development to data science and project management roles. There are often also opportunities for full-stack developers with a broad range of experience.

Founding an organisation yourself is more challenging, but can be even more impactful. And if you’ve worked in a small organisation or a startup before, you might have the broad skills and entrepreneurialism that’s required to succeed. See our profile on founding new high-impact projects for more.

Reasons not to go into software engineering

We think that most people with good general intelligence will be able to do well at software engineering. And because it’s very easy to test out (see the section on how to predict your fit in advance), you’ll be able to tell early on whether you’re likely to be a good fit.

However, there are lots of other paths that seem like particularly promising ways to help solve the world’s most pressing problems, and it’s worth looking into them. If you find programming difficult, or unenjoyable, your personal fit for other career paths may be higher. And even if you enjoy it and you’re good at it, we think that will be true for lots of people, so that’s not a good reason to think you won’t be even better at something else!

As a result, it’s important to test your fit for a variety of options. Try taking a look at our other career reviews to find out more.

You can read the full review here.

Comments

This isn't directly related, but here is a comment on this post from the EA subreddit. It was well upvoted.

This person appears to be a staff engineer and hiring manager at Google, and has worked there for over 10 years.

I struggle with this, at least as framed. The problem, right now, is not the lack of qualified causes. The problem is talent-cause-org alignment.

I work for FAANG as a staff software engineer, am EtG and, at the risk of sounding arrogant, am—most would say—objectively good at my job. If I wanted to leave to work on an important cause area, there are so many barriers to that happening.

  • There's a great deal of science and energy that has to go into running any organization that many smaller, early-stage orgs don't have expertise in. Running HR, managing people, and understanding how to foster and grow successful human capital is a really, really hard problem. Every time I interact with folks in EA working at these companies, I see systematic organization management failures at every level.
  • Many early-stage orgs are unstable with real risk of failure. To accept an offer at one, I would want to interview the founders as much as they would want to interview me. I would want at least 5 hours with a founder. Of course, the problem is that there's 100s of applicants for any role and there's a scaling problem: they cannot give me that kind of time. And so, I would fundamentally lack the information that I would need to make a confident decision to walk away from the golden handcuffs.
  • At least in the US, our government is dysfunctional. I suspect that Social Security will not exist for part of my retirement, and our safety net is particularly bad. Any employer must offer me retirement security through retirement investment account funding. The fear of ending up in a warehouse retirement home getting no medical care is very real.

There is a lot going on in this comment, and I think the content is important. It touches on institutional competence, culture fit with founders, operations (which might be underrated, and this comment suggests one reason why: talent can smell org competence), and economic security. These concerns seem to apply especially to rare talent.

Counterintuitively, I see the underlying issues as tending to favor EA. I don't want to write a giant manifesto about it unless there is demand.
 

It's worth noting that many of these restrictions (especially the first and third) would apply not only to working at EA nonprofits but also, e.g., tech startups or a political campaign as well.

Of course, the problem is that there's 100s of applicants for any role and there's a scaling problem: they cannot give me that kind of time. And so, I would fundamentally lack the information that I would need to make a confident decision to walk away from the golden handcuffs.

This problem seems much more doable. I imagine many early-stage nonprofit CEOs would be willing to spend 5 hours chatting with top people who they made an offer to, though probably not early on in the process. 

In general there's a "vibe" of the comment that I somewhat disagree with, something in the general vein of "morality ought to be really convenient, and other people should figure that out."

Hi Linch, we met and talked for a while at the SF Picnic last year. This was my Reddit comment. I'll reply here even though this is my first time interacting on this forum. I have lurked here for a long time but felt like the conversations were too time intensive to get involved. So, I'll try to keep this brief.

It's worth noting that many of these restrictions (especially the first and third) would apply not only to working at EA nonprofits but also, e.g., tech startups or a political campaign as well.

Yes, this is true. While I didn't say it in this comment, I do believe EA orgs have a competitive edge in also offering more-meaningfully-certain employment, which is new, and Big Tech doesn't really have a way to counter that. Among the set of {cause, startup, politics}, FAANG compensation is effective at keeping us from leaving for these riskier things. Sometimes an exec will counter a senior engineer who is thinking of leaving with an offer to work on a project that is more values-aligned but, generally, Big Tech has a strategy of paying top-of-market and, if that fails, making it easy to come back if the outside gig fails[1].

This problem seems much more doable. I imagine many early-stage nonprofit CEOs would be willing to spend 5 hours chatting with top people who they made an offer to, though probably not early on in the process. 

Agree. It seems like there could be an EA-aligned startup that cultivates talent connections, shares that pool, and facilitates those conversations among all EA orgs. Not just for software engineers but for all roles; HR is a big problem facing new companies—talent there is hard to find, too.

As a starting point, there could be some folks offering presentations to spread the expertise around. As an example, in November I gave an hour-long training on Forming and training a distributed InfoSec team during the pandemic to the Infosec in EA Facebook group. There are topics like these that every EA org could benefit from.

In general there's a "vibe" of the comment that I somewhat disagree with, something in the general vein of "morality ought to be really convenient, and other people should figure that out."

Well, to be starkly transparent about my own biases and mental framing: the message that EA sends is that there are effective charities and, if one only gives away enough money through EtG, one can live a moral life. This is intoxicating, and it's a sanguine trap because of that: when I wrote my first annual check, the feels were real. And the feels keep coming, year after year.

An EA startup has to overcome the feels that EtG offers to attract top talent. (I say this to make the observation that this is a market reality; not to virtue signal.)

To close, allow me to state a straw-person risk analysis for a top-of-market tech employee (based on my own intuitions, not reproducible data):

  • Stay at FAANG
    • 40-60% chance that a top performer will be able to EtG >$1M/yr during the final 10 years of their career.
    • >95% chance that the person will be able to EtG >$100k/yr during the final 10 years of their career.
    • ~70% chance of retiring at 50. Then, the person can direct their attention to EA orgs for the last 10-20 working years of their career for free/little comp. 
  • Join an early EA startup
    • ~20% chance that the startup survives its first 5 years of existence and is effective, depending on cause area. People-problems are the reasons that startups fail, not bad ideas.
    • Depending on how early and in what capacity one contributes, it's hard to guess at the possible impact one might have in the final moral calculus of this organization's altruistic contributions, if successful, when measured against something like EtG donations to AMF. The wide range of outcomes could far outpace EtG, yes. And, importantly, enable other EtG-ers to find new places to sink altruistic capital. What are the odds? Unknown.
  1. ^

    The leave-and-return-in-2-years-on-failure path has a lifetime earnings opportunity cost of ~$2M via lost equity and career trajectory.
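For concreteness, the "stay at FAANG" branch of the straw-person analysis above can be turned into a rough expected-value sketch. This is only an illustration: the point estimates are midpoints of the ranges given in the comment (e.g. 0.5 from "40-60%"), the paths are simplified, and the startup branch is deliberately left out because the comment itself calls its odds unknown.

```python
# Rough expected-value sketch of the "stay at FAANG" EtG branch above.
# The probabilities are illustrative midpoints of the ranges in the
# comment, not data.

YEARS = 10  # the "final 10 years" of the career in the comment


def faang_etg_expected_donations(p_top=0.5, top=1_000_000,
                                 p_base=0.95, base=100_000):
    """Expected total donations over YEARS.

    With probability p_top the person donates `top` per year;
    otherwise, with probability p_base, they donate `base` per year.
    """
    per_year = p_top * top + (1 - p_top) * p_base * base
    return YEARS * per_year


print(f"${faang_etg_expected_donations():,.0f}")  # roughly $5.5M
```

The startup branch can't be reduced to a number the same way: with a ~20% five-year survival rate, its value hinges almost entirely on the impact distribution conditional on success, which the comment leaves as "Unknown".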

Hi Jason, 

Do you mind humoring a few follow up questions?
 

Firstly, it seems like the skills of a staff engineer in your current role might be different from what many new EA or AI safety orgs need. For example, you probably handle a lot of design and dependencies currently. Writing and communication are probably a lot of your value. In contrast, in a small, new organization, you probably need to knock out a lot of smaller systems quickly. Your skills, and the edge you have, might differ between these orgs.

  • Does the above sound accurate to you, or is it misleading?

 

Secondly, retirement security was an important point for you. I didn't fully understand this. My guess is that in your personal case, the security you could provide for yourself and your partner might be large compared to what an EA org (or really most jobs) could easily provide. So my read was that you were talking about more junior engineers and SWEs outside of FAANG.

  • Is this guess wrong? For example, in the US / Cali, maybe I'm really ignorant and you need a large 7 figure nest egg to be safe.
  • Finally, in terms of financial security, would income stability, such as guaranteed transitional income if an organization needed to shut down, substitute for a very large income? The idea here is that everyone trusts you and you would be funded to move to successful organizations, so you wouldn't have to stay on or bail out zombie organizations.

 

HR is a big problem facing new companies—talent there is hard to find, too.

Could you write a little more here to make this more legible? Like, is there a book or blog post you can share?

To give context, it's not clear to me what you mean by HR needs. Do you mean basic operational tasks involved in HR? I know there's a recruiting org that might be pretty sophisticated; maybe that is what you mean? Maybe I'm really ignorant, but in many tech companies, modulo the recruiters, it seemed that both talent attraction and team functionality are entirely up to management (e.g. the manager or skip of your "two pizza team"). HR was involved in only pretty fundamental processes, like scheduling interviews or paying out checks (and many of these were contracted out). I knew a few directors/VPs at a FAANG who said they didn't understand HR, and that HR literally just provided documentation for terminations.

To be clear, the above could be really dysfunctional/disrespectful and betray my ignorance. 

 

Context/Subtext for my questions: 

  • I thought your comment was more selfless than it seemed, because I didn't think you were talking about yourself. I think you were talking about the choices other software engineers  face.
  • Another point is that the skill match might suggest EtG, at least for your particular case.

Firstly, it seems like the skills of a staff engineer in your current role might be different from what many new EA or AI safety orgs need. For example, you probably handle a lot of design and dependencies currently. Writing and communication are probably a lot of your value. In contrast, in a small, new organization, you probably need to knock out a lot of smaller systems quickly. Your skills, and the edge you have, might differ between these orgs.

Yes, I agree, but with a huge caveat: every person will progress through various stages of competency during their career. While many early-stage folks could contribute just as well at an early-stage EA startup (and should consider it), in the context of the 80k Hours article that I was replying to, we need to be transparent with folks about what a typical career path looks like and what tradeoffs there are to consider down each of the startup vs EtG paths. Here's the typical career progression for software engineers (though it's general enough to map onto other fields).

  1. New grad/early stage: needing direction from others on what to work on, executing that work.
  2. Leading self: proposing work in the context of larger goals and then executing that work.
  3. Leading a small team: proposing work and technical direction for a small team of engineers. Major design, some direct contributions, work-stream shepherding, mentorship.
  4. Leading a large, ambiguous area/leading multiple teams: proposing strategic direction shifts, aligning team leaders, building consensus without authority, major design work, little direct contributions, mentorship + cultivation at scale.
  5. Leading the entire technical direction for a business function: everything of the previous role, except heavily influencing all of the non-tech functions in the organization.
  6. A business executive[1]

Regarding which level of progression individuals might achieve by the end of their career, there's a bell curve distribution centred around the 3rd step. Only a handful will ever reach the 6th step of being an executive[2]. FAANG pays somewhere between $750k-$1.5M for step 5, though, and, while still rarefied, it's attainable for top talent, so it's a possible EtG goal to plan for.

All of this is a long-winded way of saying that CS folks who are about to graduate shouldn't throw away a job offer from FAANG for an EA startup out of hand, if they think they have career luck in their favor. It would be a hard call. If I were 22 and about to graduate today[3], I would give an EA startup 3-5 years to be successful before I switched tactics and tried for a FAANG or other top-of-market option.

Secondly, retirement security was an important point for you. I didn't fully understand this. My guess is that in your personal case, the security you could provide for yourself and your partner might be large compared to what an EA org (or really most jobs) could easily provide. So my read was that you were talking about more junior engineers and SWEs outside of FAANG.

Right: I'm speaking about the general SWE population considering the private sector versus nonprofits, which tend to pay less and also tend to provide fewer benefits like retirement account funding.

  • Is this guess wrong? For example, in the US / Cali, maybe I'm really ignorant and you need a large 7 figure nest egg to be safe.

In the US/Cali Bay Area (where some EA startups are based), the median house price is $1.3M. So, for someone looking to put down roots in the Bay Area and retire within the same friend network close by, a nest egg of $2M isn't an unreasonable guess; $3-4M if their spouse isn't working and they start a family. If we expect EAs to come work for a startup in the Bay Area and then move to a lower cost of living place later, we should be transparent about that. (Or, we should be encouraging EA orgs to go remote-first to unlock paying top-of-market rates in rural areas.)

  • Finally, in terms of financial security, would income stability, such as guaranteed transitional income if an organization needed to shut down, substitute for a very large income? The idea here is that everyone trusts you and you would be funded to move to successful organizations, so you wouldn't have to stay on or bail out zombie organizations.

Yes, that would make EA orgs' job offers more attractive to new-career and mid-career folks. It's probably also applicable to all other roles that an EA org would hire for.

HR is a big problem facing new companies—talent there is hard to find, too.

Could you write a little more here to make this more legible? Like, is there a book or blog post you can share?

To give context, it's not clear to me what you mean by HR needs. Do you mean basic operational tasks involved in HR? Maybe I'm really ignorant, but in many tech companies, it seemed to me like both talent attraction and team functionality are entirely up to management (e.g. the manager or skip of your "two pizza team"). HR was involved in only pretty fundamental processes, like scheduling interviews or paying out checks (and many of these were subcontracted). In fact, I knew a few directors/VPs at a FAANG who said they didn't really understand HR, and that HR literally just provided documentation for terminations.

To be clear, the above could be really dysfunctional/disrespectful and betray my ignorance. 

It's common in tech to hear the sentiment in one's social network that HR provides no value, so I'm not surprised to see this. In Silicon Valley, there's similar discounting of the value provided by folks in operations, support, logistics, and finance.

A note on horizontal organization roles: there are types of roles that apply horizontal, cultural influence. For example, a wise person once said, "If you want to understand why an organization behaves the way that it does, look at the incentives of the people in that organization."

I point to HR, specifically, because it's an area where I've seen the most struggles in small-stage startups precisely because it is a horizontal force multiplier. Here's some values that a functioning HR organization provides to a small stage startup:

  • A meritocratic system of promotion/career advancement that's seen as fair by the employees. This includes transparency about the expected roles and responsibilities at each stage of career progression. Of course, this includes some objective criteria for deciding to fire someone, and all of the legal implications thereof (as mentioned above), but that's not the most important part. Retention is partially a function of aligning the hedonic treadmill with real career progress possibilities.
  • Setting norms on how individuals interact and, theoretically, backstopping those norms with enforcement. For example, an org might say that aggressive behavior in meetings and emails is not tolerated. This is just a theoretical rule unless org leaders actually back up those words with actions through the promotion process and, in extreme cases, HR-backed disciplinary action. It's also the function of HR to repeat the company's behavioral expectations, periodically.
  • Ensuring fair hiring practices is non-trivial. It's common in startups to hand-wave over this problem. But actually objectively evaluating candidates, and ensuring that bias doesn't creep in and that pay is equal among all similar roles and levels, is hard. Radical transparency can help here, but it doesn't just magically fix the problem.
  • Setting organizational goals against which the org is measured is sometimes seen as operations or product management, but there's an HR role there too: the leaders of the sub-orgs that set those goals need to be held accountable, and any exec/leadership compensation should be tied to business outcomes in a way that lower-level employees' compensation is not. And all of those company benchmarks and the feedback cycle need to be shared with the whole employee population, quarterly.
  • Assessing employee satisfaction and collecting feedback anonymously on an ongoing basis. This can be as simple as an anonymous Google Form that's open for two weeks once a year. But actually collating the data, slicing it by org, trending it over time, and proposing cultural changes to address employee feedback is hard.
  • Benefits, benefits, benefits. This is a constantly evolving space. To some extent, this can be outsourced, but there should be someone on staff continually evaluating the changing landscape of offerings (and competitors' offerings), keeping employees up to date about changes, and acting as a partner to fix problems when they come up.

I could go on but these are the ones that came to mind while I was writing this, and I think that I've exceeded the amount of time that I intended to spend on this. 😉
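To make the survey point above concrete, here's a minimal sketch of the "collating and slicing" step. All names and numbers are hypothetical; it just assumes anonymized responses exported as a table with an org, a year, and a 1–5 satisfaction score, and uses pandas to produce a per-org trend over time.

```python
import pandas as pd

# Hypothetical anonymized survey export: one row per response.
responses = pd.DataFrame({
    "year":  [2020, 2020, 2020, 2021, 2021, 2021],
    "org":   ["Eng", "Eng", "Ops", "Eng", "Ops", "Ops"],
    "score": [4, 3, 5, 4, 4, 2],  # 1-5 satisfaction rating
})

# Slice by org and trend over time: mean satisfaction per org per year.
trend = responses.groupby(["org", "year"])["score"].mean().unstack("year")
print(trend)
```

The hard part, of course, isn't the aggregation itself but deciding which slices matter and acting on what they show.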

 

  1. ^

    I acknowledge that this assumes a fully meritocratic progression; there are indeed many reasons that individuals might be given these roles without being qualified.

  2. ^

    I recognize that founders of startups are not necessarily destined to lead 1,000 employee organizations but that they do need some mix of all of the skills in these stacks. And this is often why startups fail.

  3. ^

    Full disclosure: I do not have a degree and am an anomaly. So, I can't really speak with authenticity on this hypothetical.

Thanks so much! I thought this was incredibly informative and in-depth. There are some really valuable insights here.

Like you, I also think it's not obvious that someone with Jason's skill set should be doing something other than earning-to-give. As a practical matter, most EA employers aren't willing to pay >$1M/year, except maybe in a few niche situations in ML/AI safety or very successful EA fintech startups.

(I do vaguely think that if you just naively crunch the numbers on what great SWEs can contribute now vs. Open Phil's last dollar, there probably exist opportunities somewhat above this bar, however, including outside of AI Safety and weird crypto stuff.)

There's also a strong argument that someone like Jason has a clear comparative advantage, relative to the rest of the movement, in doing E2G stuff in BigTech, and maybe as a community this coordination problem is better solved with fewer career shuffles.

Ozzie's post on opportunity costs of technical talent is related.

Hi Jason,

Thank you so much for your detailed engagement with my somewhat blunt + rude comment, and for the density of your honest comments! I will try to make a more substantive comment later, but I want to say I really appreciate your comments and for your hard work earning-to-give. 

I do remember meeting you last summer and I thought our conversation was quite good.

In general there's a "vibe" of the comment that I somewhat disagree with, something in the general vein of "morality ought to be really convenient, and other people should figure that out."

A moral saint might suffer arbitrary inconveniences to have an impact. But most real people won't. 

A better framing is that Jason is a "customer" of the EA talent pipeline, and before telling him that his desires are a "bug", we should try really hard to give him more of what he wants!

Thanks, this is a good reframing. :)

Two other paths that might be unusually impactful for EA software engineers:

  1. Joining a for-profit startup with a) EA founders,  b) a small engineering team, and c) where engineering appears critical to the organization's success.
    1. The "EA founder" part of this isn't strictly necessary, but this makes it much more likely that a great software engineer can create a lot of value through generating future donations (if we expect much of the equity to be captured by founders and investors).
    2. Note that because EA isn't very funding-constrained, your bar for this might be very high, like ~$millions/year in expectation per new engineer.
    3. Personally, I think FTX, Alameda Research, and Lantern Ventures are over this bar.
      1. (COI disclaimer: I have multiple personal financial ties here, including via the FTX EA fellowship)
    4. I would not be surprised if several others exist in the space that I'm not personally aware of.
  2. Ladder climbing within AI labs at large tech companies.
    1. I think it's plausibly really good to have EA engineers in positions that'd be close to the AI leadership at top tech companies in 5-15 years.
    2. Note that I think most software folks who are in a position to do so should not do this, partially for reasons outlined here. The people who are best at it are a) comparatively good at climbing prestige ladders, b) agenty enough to spot great opportunities for impact, and c) willing to act upon them.
    3. This also has obvious downside risks (e.g. via speeding up AI timelines slightly).
    4. And also I'm more than a bit worried about motivated cognition causing people to overrate the value (and/or be wrong about the sign) of the direct impact of working technically interesting and cushy BigTech jobs.
    5. Still, right now this appears to be a very underrated path, and the case for it at least naively seems reasonable.

Two things I'd like to note from my corner of EA:

  • I would add just how underrepresented frontend / full-stack skill sets still are in EA relative to what I was used to in Silicon Valley. If you have those skill sets, you should realize how valuable that is.
  • CEA has an expression of interest open for a full stack engineer.

TL;DR: Even though 80k talks about personal fit, in practice EA software developers neglect their personal fit within the domain of software; this is a concept I recommend adding.

 

Example very common things that happen:

  1. People pick a domain (for example, Software Engineering vs ML) based ONLY on the domain's expected impact or on what this person already has experience with, with no consideration of how fun/exciting this domain is for them. I think this is bad.
  2. Someone doesn't enjoy their job, but this is due to "having a bad boss" or "having nobody to learn from", and not due to having a bad personal fit for software engineering in general.

You said "gain a really deep understanding of the basics" - I wouldn't put this in an EA Software Career guide without serious disclaimers

TL;DR: I think EAs spend too much time "learning the basics" rather than "doing something productive and scrappy", and it is a bad idea to push them more towards the "learning the basics" side.

I have a ton to say about this and I've deleted several too-long-drafts already, but feel free to ask/disagree of course.

A similar example: I wouldn't tell the EA community that they've got to write even longer documents. ;) 

Just as I'm trying to tell myself to not make this comment even longer. This is really hard. Ok sending!

However, there is a serious risk associated with this route: it seems possible for engineers to accidentally increase risks from AI by generally accelerating the technical development of the field. We’re not sure of the more precise contours of this risk (e.g. exactly what kinds of projects you should avoid), but think it’s important to watch out for. That said, there are many more junior non-safety roles out there than roles focused specifically on safety, and experts we’ve spoken to expect that most non-safety projects aren’t likely to be causing harm.

I found this a bit hard to follow, especially given the focus in the previous paragraphs on safety work specifically. It reads to me like it's making the counterintuitive claim that "safety" work is actually where much of the danger lies. Is that intended?

That's not the intention, thanks for pointing this out!

To clarify, by "route", I mean gaining experience in this space through working on engineering roles directly related to AI. Where those roles are not specifically working on safety, it's important to try to consider any downside risk that could result from advancing general AI capabilities (this in general will vary a lot across roles and can be very difficult to estimate).

ensuring that these experiments are as efficient and safe as possible

Is "safe" here meant in the sense of "not accelerating risks from AI," or in the sense of "difficult to steal" (i.e. secure)?

A bit of both - but you're right, I primarily meant "secure" (as I expect this is where engineers have something specific to contribute).

I can think of a few other areas of direct impact which could particularly benefit from talented software engineers:

Improving climate models is a potential route for high impact on climate change; there are computational modelling initiatives such as the Climate Modeling Alliance and startups such as Cervest. It would also be valuable to contribute to open-source computational tools such as the Julia programming language and certain Python libraries.

There is also the area of computer simulations for organisational / government decision making, such as Improbable Defence (disclosure: I am a former employee and current shareholder), Simudyne, and Hash.ai. I've heard anecdotally that a few employees of Hash.ai are sympathetic to EA, but I don't have first-hand evidence of this.

More broadly, there are many areas of academic research, not just AI safety, which could benefit from more research software engineers. The Society of Research Software Engineering aims to provide a community for research software engineers and to make this a more established career path. This type of work in academia tends to pay significantly less than private-sector software salaries, so it is worse for ETG, but on the flip side this is an argument for it being a relatively neglected opportunity.
