Dawn Drescher

Cofounder @ AI Safety GiveWiki
2478 karma · Joined Nov 2014 · Working (6-15 years) · 8303 Bassersdorf, Switzerland
givewiki.org

Bio

I’m working on impact markets – markets to trade nonexcludable goods. (My profile.)

I have a conversation menu and a Calendly for you to pick from! 

If you’re also interested in less directly optimific things – such as climbing around and on top of boulders or amateurish musings on psychology – then you may enjoy some of the posts I don’t cross-post from my blog, Impartial Priorities.

Pronouns: Ideally she or they. I also still go by Denis and Telofy in various venues.

How others can help me

GoodX needs advisors/collaborators for marketing, and funding. The funding can be for our operations or for retro funding of other impactful projects on our impact markets. We're a PBC and seek SAFE investments over donations.

How I can help others

I’m happy to do calls, give feedback, or go bouldering together, also virtually. You can book me on Calendly.

Please check out my Conversation Menu!

Sequences
2

Impact Markets
Researchers Answering Questions

Comments
566

Yep, failing fast is nice! So you were just skeptical on priors because any one new thing is unlikely to succeed?

Yep, that makes a lot of sense. I've done donation forwarding for < 10 projects once, and it was already quite time-consuming!

I've (so far) read the first post and love it! But when I was working full-time on trying to improve grantmaking in EA (with GiveWiki, aggregating the wisdom of donors), you mostly advised me against it. (And I am actually mostly back to ETG now.) Was that because you weren't convinced by GiveWiki's approach to decentralizing grantmaking or because you saw little hope of it succeeding? Or something else? (I mean, please answer from your current perspective; no need to try to remember last summer.)

Oh, great! I interpreted “This is an offer to donate this amount to the project on the condition that it eventually becomes active” to mean that the project might not become active for any number of reasons, only one of them being the funding goal.

Oh, brilliant! USDC would also be my top choice. But I'm basically paying into a DAF, and so can't get a refund if this project doesn't succeed, right? That would have a high cost in option value since I don't know whether my second-best donation opportunity will be on Manifund. Is there a way to donate to Rethink Priorities or the Center on Long-Term Risk through Manifund? That would lower-bound the cost in option value.

From what I've learned about Shapley values so far, this seems to mirror my takeaway. 

 

Nice! To be sure, I want to put an emphasis on any kind of attribution being an unnecessary step in most cases rather than on the infeasibility of computing it.

There is complex cluelessness, nonlinearity from perturbations at perhaps even the molecular level, and a lot of moral uncertainty (because even though I think that evidential cooperation in large worlds can perhaps guide us toward solving ethics, that'll take enormous research efforts to actually make progress on), so infeasibility is already the bread and butter of EA. In the end we’ll find a way to 80/20 it (or maybe -80/20 it, as you point out, and we'll never know) to not end up paralyzed. I've many times just run through mental “simulations” of what I think would've happened if any subset of people on my team had not been around, so this 80/20ing is also possible for Shapley values.
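For what it's worth, here's a minimal sketch of what that 80/20ing can look like in code, with an entirely made-up three-person team (A, B, C) and a guessed value for what each subset of them would have achieved. Instead of enumerating every counterfactual subset, you sample a few thousand random orderings and average the marginal contributions – the standard Monte Carlo approximation of the Shapley value.

```python
import random

# Entirely made-up team: three hypothetical collaborators and a guessed
# "value the project would have had if only this subset had been around."
made_up_values = {
    frozenset(): 0,
    frozenset({"A"}): 4,
    frozenset({"B"}): 3,
    frozenset({"C"}): 1,
    frozenset({"A", "B"}): 9,
    frozenset({"A", "C"}): 6,
    frozenset({"B", "C"}): 5,
    frozenset({"A", "B", "C"}): 12,
}

def approx_shapley(players, value, samples=10_000):
    """Average each player's marginal contribution over randomly sampled
    orderings instead of enumerating all of them (the 80/20 version)."""
    totals = {p: 0.0 for p in players}
    for _ in range(samples):
        order = random.sample(players, len(players))
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            totals[p] += value[with_p] - value[coalition]
            coalition = with_p
    return {p: total / samples for p, total in totals.items()}

print(approx_shapley(["A", "B", "C"], made_up_values))
# Converges to the exact Shapley values {'A': 5.5, 'B': 4.5, 'C': 2.0}.
```

With only three players the exact sum is of course trivial; the sampling only starts to matter once the set of potentially relevant contributors gets large.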

If you do retroactive public goods funding, it's important that the collaborators can, up front, trust that the rewards they'll receive will be allocated justly, so being able to pay them out in proportion to the Shapley value would be great. But as altruists, we're only concerned with rewards to the point where we don't have to worry about our own finances anymore. What we really care about is the impact, and for that it's not relevant to calculate any attribution.

I do not understand this point but would like to (since the stance I developed in the original post went more in the direction of "EAs are too individualist").

I might be typical-minding EAs here (based on me and my friends) but my impression is that a lot of EAs are from lefty circles that are very optimistic about the ability of a whole civilization to cooperate and maximize some sort of well-being average. We've then just turned to neglectedness as our coordination mechanism rather than long, well-structured meetings, consensus voting, living together and other such classic coordination tools. In theory (or with flexible resources, dominant assurance contracts, and impact markets) that should work fine. Resources pour into campaigns that are deemed relatively neglected until they are not, at which point the resources can go to the new most neglected thing. Eventually nothing will be neglected anymore.

So it seems to me that the spirit is the same one of cooperativeness, community, and collective action. Just the tool we use to coordinate is a new one.

But some 99.9% (total guess) of the population are more individualist than that (well, I've only ever lived in WEIRD cultures, so I'm in an obvious bubble). They don't think in terms of civilizations thriving or succumbing to infighting but in terms of the standing of their family in society or even just their own. (I'm excluding people in poverty here – almost anyone, including most altruists, will behave selfishly when they are in dire straits.)

Shapley values are useful for startups or similar enterprises that have a set goal that everyone works toward. The degree to which they work toward it is a fixed attribute of each collaborator. The core is more about trying to find an attribution split that sets just the right incentives to maximize the number of people who are interested in collaborating in the first place. (I think I'm getting this backwards, but something of this sort. It's been too long since I researched these things.)

If someone is very community- and collective-action-minded, they'll have the tacit assumption that everyone is working towards the good of the whole community, and they're just wondering how they can best contribute to that. That's how I see most EAs.

If someone is very individualistic, they'll want to travel to 10 countries, have 2 kids, drive a car that can accelerate real fast, and get their brain frozen when they die. They'll have no tacit assumptions about any kind of greater community or their civilization and never think about collective action. But if they did, their question would be what's in it for them, and, if there is something in it for them, whether they can conspire with a smaller set of collaborators to get more of it. They'll turn to cooperative game theory, crunch the numbers, and then pick out just the right co-conspirators to form a subcoalition.

So that's the intuition behind that overly terse remark in my last message. ^.^
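To make the Shapley-vs-core contrast concrete, here's a small continuation of the sketch above (same made-up three-person game): the core asks whether any subcoalition could grab more for itself by splitting off, which is exactly the individualist "what's in it for us" question.

```python
from itertools import chain, combinations

# Same made-up coalition values as in the sketch above.
made_up_values = {
    frozenset(): 0,
    frozenset({"A"}): 4,
    frozenset({"B"}): 3,
    frozenset({"C"}): 1,
    frozenset({"A", "B"}): 9,
    frozenset({"A", "C"}): 6,
    frozenset({"B", "C"}): 5,
    frozenset({"A", "B", "C"}): 12,
}
players = ["A", "B", "C"]

def subcoalitions(players):
    return (frozenset(s) for s in chain.from_iterable(
        combinations(players, r) for r in range(1, len(players) + 1)))

def in_core(allocation):
    """An allocation is in the core if it pays out exactly the value of the
    grand coalition and no subcoalition could do better by defecting."""
    if abs(sum(allocation.values()) - made_up_values[frozenset(players)]) > 1e-9:
        return False
    return all(sum(allocation[p] for p in s) >= made_up_values[s] - 1e-9
               for s in subcoalitions(players))

print(in_core({"A": 5.5, "B": 4.5, "C": 2.0}))  # Shapley split: True in this game
print(in_core({"A": 4.0, "B": 4.0, "C": 4.0}))  # equal split: False, A and B would defect
```

In this particular game the Shapley split happens to lie in the core, but in general the core can be empty or exclude the Shapley allocation, which is why the two tools answer different questions.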

Off topic: I have a badly structured, hastily written post where I argue that it's not optimal for EAs to focus maximally on the one thing where they can contribute most (AI safety, animal rights, etc.) and neglect everything else, but that it's probably better to cooperate with all the other efforts in their immediate environment that they endorse, at least to the extent to which the median person in that environment cooperates with them. Or else we're all (slightly) sabotaging each other all the time and get less change in aggregate. I feel like mainstream altruists (a small percentage of the population) do this better than some EAs, and it seems conceptually similar to individualism.

Very glad to read that, thank you for deciding to add that piece to your comment :)!

Awww! :-D

Great work; I hope you'll succeed with the fundraiser! Do you have blockchain-based donation options like, e.g., Rethink Priorities offers? Ideally one of the major Ethereum layer 2s or Solana? Ty!

Shapley values are a great tool for divvying up attribution in a way that feels intuitively just, but I think for prioritization they are usually an unnecessary complication. In most cases you can only guess what they might be because you can't mentally simulate the counterfactual worlds reliably, and your set of collaborators contains billions of potentially relevant actors. But as EAs we can “just” choose whatever action will bring about the world history with the greatest value, regardless of any impact attribution to ourselves or anyone else.

I like the Shapley value and think it would make similar recommendations, but it adds another layer of infeasibility (and arbitrariness) on top of an already infeasibly complex optimization problem without adding any value.

Then again many of us are strongly motivated by “number go up,” so Shapley values are probably helpful for self-motivation. :-3

(I think if EAs were more individualist, “the core” from cooperative game theory would be more popular than the Shapley value.)

Oh, and we get so caught up in the object-level here that we tend to fail to give praise for great posts: Great work writing this up! When I saw it, it reminded me of Brian Tomasik's important article on the same topic, and sure enough, you linked it right before the intro! I'm always delighted when someone does their research so well that whatever random spontaneous associations I (as a random reader) have are already cited in the article!

I use “impact” to mean “net impact,” basically.

An output could be a piece of forest that is protected from logging. An outcome is some amount of CO2 converted into O2 that wouldn't have been otherwise. But also a different piece of forest getting logged that wouldn't have been otherwise. And a bunch of r-strategist animals dying of parasites, starvation, and predation who would otherwise not have been born. Some impact on the workers who now have to travel further to log trees. And much more.

The attempt to trade off all of these effects (perhaps using an open-source repository of composable probabilistic models like Squiggle) is what results in an impact estimate.
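As a purely illustrative sketch (not a real estimate and not actual Squiggle, just Python with invented numbers and an invented "impact points" unit), such a composable model might look roughly like this:

```python
import random
import statistics

# Toy, made-up numbers: a sketch of how the direct and indirect effects of
# protecting a piece of forest might be combined into one net impact estimate.
# None of these distributions are real estimates.
def net_impact_sample():
    co2_sequestered = random.lognormvariate(8, 0.5)   # tonnes of CO2 kept sequestered (invented)
    leakage_share = random.betavariate(4, 6)           # fraction of logging displaced elsewhere (invented)
    wild_animal_effect = random.gauss(-50, 30)          # welfare effect on wild animals, sign uncertain (invented)
    worker_effect = random.gauss(-5, 2)                  # extra travel burden on loggers (invented)
    # Convert everything into one common, made-up unit ("impact points").
    return (co2_sequestered * (1 - leakage_share) * 0.01
            + wild_animal_effect
            + worker_effect)

samples = [net_impact_sample() for _ in range(100_000)]
print("mean net impact:", statistics.mean(samples))
print("P(net impact < 0):", sum(s < 0 for s in samples) / len(samples))
```

The point is just that the individual effect estimates stay separate and reusable, and the net impact falls out of combining them; the numbers themselves carry no information.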
