JP Addison

4879 karma · Joined Feb 2017 · Working (6-15 years) · Cambridge, MA, USA
jpaddison.net

Bio

Head of the CEA Online Team, which runs this Forum.

A bit about me, to help you get to know me: Prior to CEA, I was a data engineer at an aerospace startup. I got into EA through reading the entire archive of Slate Star Codex in 2015. I found EA naturally compelling, and donated to AMF, then GFI, before settling on my current cause prioritization of meta-EA, with AI x-risk as my object-level preference. I try to have a wholehearted approach to morality, rather than thinking of it as an obligation or opportunity. You can see my LessWrong profile here.

I love this Forum a bunch. I've been working on it for 5 years as of this writing, and founded the EA Forum 2.0. (Remember 1.0?) I have an intellectual belief in it as an impactful project, but also a deep love for it as an open platform where anyone can come participate in the project of effective altruism. We're open 24/7, anywhere there is an internet connection.

In my personal life, I hang out in the Boston EA and Gaymer communities, enjoy houseplants, table tennis, and playing co-op games with my partner, who has more karma than me.

Comments: 652

Topic contributions: 17

FYI thanks for all the helpful comments here — I promptly got covid and haven't had a chance to respond 😅

This is a really nice idea, thanks!

Here’s a puzzle I’ve thought about a few times recently:

The impact of an activity ($I$) is due to two factors, $A$ and $B$. Those factors combine multiplicatively to produce impact: $I = A \times B$. Examples include:

  • The funding of an organization and the people working at the org
  • A manager of a team who acts as a lever on the work of their reports
  • The EA Forum acts as a lever on top of the efforts of the authors
  • A product manager joins a team of engineers

Let’s assume in all of these scenarios that you are only one of the players in the situation, and you can only control your own actions.

From a counterfactual analysis, if you can increase your contribution by 10%, then you increase the impact by 10%, end of story.
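Spelled out in the notation above, scaling your factor by 1.1 scales the whole product by 1.1:

$$I' = (1.1\,A) \times B = 1.1\,(A \times B) = 1.1\,I$$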

From a Shapley Value perspective, it’s a bit more complicated, but we can start with a prior that you split your impact evenly with the other players.
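To make that even-split prior concrete, here's a minimal sketch of a Shapley calculation (the player names and the toy payoff are my own illustration, not a real model of any of the scenarios above):

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over every join order."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Toy multiplicative scenario: funding and org staff combine
# multiplicatively, and either one alone produces nothing.
def impact(coalition):
    return 10.0 if coalition == frozenset({"funder", "org"}) else 0.0

print(shapley_values(["funder", "org"], impact))
# {'funder': 5.0, 'org': 5.0}
```

For two players whose contributions are worth nothing alone, this always lands on a 50/50 split, which is exactly the "split evenly" prior.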

Both these perspectives have a lot going for them! The counterfactual analysis has important correspondences to reality: if you do 10% better at your job, the world gets 10% better. Shapley Values prevent the scenario where the multiplicative impact causes the involved agents to collectively contribute too much.

I notice myself feeling relatively more philosophically comfortable running with the Shapley Value analysis in scenarios where I feel aligned with the other players in the game. And the downsides of the Shapley Value approach might shrink if I actually ran the math. (Fake edit: I made a really hacky guess at how I'd calculate this using this calculator, and it wasn't that helpful.)

But I don’t feel 100% bought-in to the Shapley Value approach, and I think there’s value in paying attention to the counterfactuals. My unprincipled compromise would be to take some weighted geometric mean of the two and call it a day.
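To make the compromise concrete (the weight $w$ is my own made-up dial, nothing principled):

$$\hat{I} = C^{\,w} \cdot S^{\,1-w}, \qquad w \in [0,1]$$

where $C$ is your counterfactual impact and $S$ is your Shapley value; $w = 1$ recovers the pure counterfactual view and $w = 0$ the pure Shapley view.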

Interested in comments.

I think it's a pretty important distinction that "EA" is a question which has no CEO, while the Centre for Effective Altruism does. I recommend changing the title here.

I agree with you, and so does our issue tracker. Sadly, it does seem a bit hard. Tagging @peterhartree as a person who might be able to tell me that it's less hard than I think.

I worked with Sam for 4 years and would recommend the experience. He's an absolute blast to talk tech with, and a great human.

Answer by JP Addison · Feb 27, 2024

Maybe a report from someone with a strong network in the Silicon Valley scene about how AI safety's reputation is evolving post-OAI-board-stuff. (I'm sure lots of takes already exist, and I guess I'd be curious for either a data-driven approach or a post which tries to take a levelheaded survey of different archetypes.)

I'm not sure if this qualifies, but the Creative Writing Contest featured some really moving stories.

I have a Spotify playlist of songs that seemed to rhyme with EA to me.

There's some good kabbalistic significance to our issue tracker, but I'm not sure what it is.

First, a note: I have heard recommendations to try to lower the number of issues, and I've never understood them except as a way to pretend you don't have bugs. For sure some of those issues are stale and out of date, but quite a few are probably live but ultimately very edge-case and unimportant bugs, or feature requests we probably won't get to but that could be good. I don't think pruning it is a good use of time, and the most common approach I've seen companies take is to auto-close old bugs, which strikes me as disingenuous.

In any case, we have a fairly normal process of setting OKRs for our larger projects, and tiny features / bugfixes get triaged into a backlog that we look at when planning our weekly sprints. The triage process happens in our Asana and is intentionally not publicly visible, so we can feel comfortable marking something as low priority without worrying about having to argue about it.

Thanks for the report. We currently do the second, which isn't ideal, to be sure. If someone redrafts and republishes after a post has been up for a while, an admin has to adjust the published date manually. This happens less often than I would've expected, so we haven't prioritized improving it.
