
tylermjohn

623 karma · Joined Oct 2014

Comments (53)

I haven't tried this, but I'm excited about the idea! Effective Altruism as an idea seems unusually difficult to communicate faithfully, and creating a GPT that can be probed on various details and correct misconceptions seems like a great way to increase communication fidelity.

On your future directions / tentative reflections (with apologies that I haven't looked into your model, which is probably cool and valuable!):

To the extent that we think this is relevant for things like lock-in and x-risk prioritisation, we also need to think that current trends are predictive of future trends. But it's not at all clear that they are once you take into account the possibility of explosive growth a la https://www.cold-takes.com/all-possible-views-about-humanitys-future-are-wild/. Moreover, worlds where there is explosive growth have way more moral patients, so if their probability is non-negligible they tend to dominate moral considerations.

Once we focus on explosive growth scenarios as the most important, I find considerations like these much more persuasive: https://www.effectivealtruism.org/articles/the-expected-value-of-extinction-risk-reduction-is-positive

I've written up decently extended reflections on why we shouldn't give much weight to the fact that the history and present of our world is an utter moral hellscape, which I'm happy to share privately if these questions are important for you.

(All that said, I do think lock-in is undervalued in longtermism and I'm excited to see more work on that, and I do think the path to x-risk prioritisation is much more complicated than many EAs think and that these kinds of considerations you point out are exactly why.)

I've only skimmed the essay but it looks pretty good! Many of the ideas I had in mind are covered here, and I respond very differently to this than to your post here.

I don't know what most EAs believe about ethics and metaethics, but I took this post to be about the truth or desirability of these metaethical, ethical, and methodological positions, not about whether they're better than what most EAs believe. And that's what I'm commenting on here.

Hi Spencer and Amber,

There's a pretty chunky literature on some of these issues in metaethics, e.g.:

  • Moral fictionalism, or why it could make sense to talk in terms of moral truths even if there aren't any
  • Moral antirealism/constructivism, or why there can be moral "shoulds" and "oughts" even if these are just mental attitudes
  • Why, even if you're a pluralist, utilitarian considerations dominate your reasoning across a range of psychologically typical value systems, given how much welfare matters to people compared to other things and how much we can affect it through effective altruism
  • How there can be different ways of valuing things, some that you endorse and some that you don't (especially among constructivists like Street, Korsgaard, and Velleman), and why it could make sense to only act on values you endorse acting on
  • Relatedly, how your moral theory might be different from your spontaneous sentiments because you can think through these sentiments and bring them into harmony (e.g. the discussion of reflective equilibrium)

Obviously it would be a high bar to require PhD-level training on these topics and reading through the whole literature before posting on the EA Forum, so I'm not suggesting that! But I think it would be useful to talk some of these ideas through with antirealist metaethicists, because they have responses to a bunch of these criticisms. I know Spencer and I chatted about this once, and probably we should chat again! I could also refer you to some other EA folks who would be good to chat to about this, probably over DM.

All of that said, I do think there are useful things about what you're doing here, especially e.g. part 2, and I do think that some antirealist utilitarianism is mistaken for broadly the reasons you say! And the philosophers definitely haven't gotten everything right; I actually think most metaethicists are confused. But claims like those made in part 3, especially, have a lot of good responses in the existing discussion.

ETA: Actually if you're still in NYC one person I'll nominate to chat with about this topic is Jeff Sebo.

I'd be pretty excited to see a new platform for retail donors giving to x-risk charities. For this, you'd want some x-risk opportunities that are highly scalable (can absorb ≥ $10m p.a. and will execute the project reliably over years without intervention or outside pressure), measurable (you can write out a legible, robust, well-quantified theory of change from marginal dollars to x-risk reduction), and that have a pretty smooth returns curve (so people can have decent confidence that their donations have the returns they expect, whether they are retail donors or large donors). And then you could build out cost-effectiveness models given the different assumptions about values (e.g. time preference, population ethics) and extinction risk models that people might have, and offer retail donors a few different ways of thinking about and modeling the impacts of their donations (e.g. lives saved, micro- or picodooms).
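To make that last point concrete, here's a minimal sketch (in Python, with entirely made-up placeholder numbers and function names of my own) of the kind of model such a platform could let donors play with; a real version would let users swap in their own cost estimates, time preference, and population-ethics views:

```python
# A toy sketch of a donor-facing cost-effectiveness model for x-risk donations.
# Every number below is an illustrative placeholder, not an estimate.

def picodooms_averted(donation_usd: float, cost_per_picodoom_usd: float) -> float:
    """Expected picodooms (1e-12 reduction in extinction probability) averted by a
    donation, assuming a smooth returns curve so marginal cost is roughly constant."""
    return donation_usd / cost_per_picodoom_usd


def expected_lives_saved(picodooms: float, people_at_stake: float) -> float:
    """Translate picodooms averted into expected lives saved, given the donor's view
    about how many people are at stake (current population only, or a much larger
    number on population-ethics views that count future people)."""
    return picodooms * 1e-12 * people_at_stake


if __name__ == "__main__":
    pd = picodooms_averted(donation_usd=1_000, cost_per_picodoom_usd=100.0)  # placeholder cost
    print(f"Picodooms averted: {pd:.1f}")
    print(f"Expected lives saved (current population only): {expected_lives_saved(pd, 8e9):.2f}")
    print(f"Expected lives saved (one longtermist population guess): {expected_lives_saved(pd, 1e15):,.0f}")
```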

I'd guess there are some bio interventions that fit these criteria. For AI safety, there could be a crowd-funded compute cluster for safety research. This could be pretty boring and difficult to model robustly unless there were good reporting on the wins that came out of research using the cluster and further iteration on the model based on that track record.

Noting that I think that making substantive public comments on this draft (including positive comments about what it gets right) is one of the very best volunteer opportunities for EAs right now! I plan to send a comment on the draft before the deadline of 6 June.

Thanks Sam! I don't have much more to say about this right now since on a couple of things we just have different impressions, but I did talk to someone at 80k last night about this. They basically said: some people need the advice Tyler gave, and some people need the advice Sam gave. The best general advice is probably "apply broadly": apply to some EA jobs, some high-impact jobs outside of EA, some upskilling jobs, etc. And then pick the highest-EV job you were accepted to (where EV is comprehensive and includes things like improvements to your future career from credentialing and upskilling).
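For what it's worth, here's a toy sketch of what that kind of comprehensive EV comparison could look like; the categories and all the numbers are my own made-up illustrations, not anything 80k suggested:

```python
# Toy comparison of job offers by comprehensive expected value.
# "Impact units" and every number here are invented purely for illustration.

from dataclasses import dataclass


@dataclass
class Offer:
    name: str
    direct_impact: float    # impact you expect to have while in the role
    career_capital: float   # expected future impact from credentials and skills gained
    p_thrive: float         # chance you perform well and stick with the role

    def expected_value(self) -> float:
        return self.p_thrive * (self.direct_impact + self.career_capital)


offers = [
    Offer("EA org, junior role", direct_impact=10, career_capital=3, p_thrive=0.5),
    Offer("Non-EA org, great managers/training", direct_impact=2, career_capital=12, p_thrive=0.8),
]

for o in sorted(offers, key=lambda o: o.expected_value(), reverse=True):
    print(f"{o.name}: EV = {o.expected_value():.1f}")
```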

Hi readers! I work as a Programme Officer at a longtermist organisation. (These views are my own and don't represent my employer!) I think there's some valuable advice in this post, especially about not being constrained too much by what you majored in. But after running several hiring rounds, I would frame my advice a bit differently. Working at a grantmaking organisation did change my views on the value of my time. But I also learned a bunch of other things, like:

  1. The majority of people who apply for EA jobs are not qualified for them.
  2. Junior EA talent is oversupplied, because of management constraints, top of funnel growth, and because EAs really want to work at EA organisations.
  3. The value that you bring to your organisation/to the world is directly proportional to your skills and your fit for the role.

Because of this, when I talk to junior EAs my advice is typically not to apply to lots more EA jobs but rather to find ways of skilling up — especially by working at a non-EA organisation that has excellent managers and invests in training its staff — so that they can build key skills that make them indispensable to EA organisations.

Here's a probably overly strong way of stating my view that might bring the point home: try to never apply to EA jobs, and instead get so good at something that EA orgs will headhunt you and fight over you.

I know that there are lots of nice things about working at EA organisations (culture, community, tangible feelings of impact) but if you really value work at EA organisations, then you should value highly skilled work at EA organisations even more (I think a lot more!). Having more junior EAs find ways to train up their skills and spend less time looking for EA work is the only way I can see to convert top of funnel community growth into healthy middle of funnel community growth.

I'm not sure if this fits your concept, but it might be helpful to have a guidebook that caters specifically to new EAs, giving guidance to people who are excited about the ideas but unsure how to put them into practice in daily life, in order to convert top of funnel growth into healthy middle of funnel growth. This could perhaps be coupled with a more general-audience book that appeals to people who are antecedently interested in the ideas.

A few things I'd like to see in this are the reasoning transparency stuff, guidance on going out and getting skills outside of the EA community to bring back into it, anti-burnout stuff, and various cultural elements that will help community health and epistemics.
