web development, product design, and all things about startups and shipping fast
- Voluntary human challenge trials
- Run a real money prediction market for US citizens
- Random compliance stuff that startups don't always bother with: GDPR, purchased mailing lists, D&I training in California, ...
Here are some illegal (or gray-legal) things that I'd consider effectively altruistic, though I predict no "EA" org will ever do them:
- Produce medicine without a patent
- Pill-mill prescription-as-a-service for certain medications
- Embryo selection or human genome editing for intelligence
- Forge college degrees
- Sell organs
- Sex work earn-to-give
- Helping people immigrate illegally
A gripe I have with EA is that it is not radical enough. The American civil rights movement of the 1950s and '60s was very effective and altruistic, even though its members were arrested and its leaders were wiretapped by the FBI and assassinated under suspicious circumstances. Or consider the Stonewall riots.
More recently: I think Uber is counterfactually good for the world. It's good that Nakamoto made Bitcoin. It's good that Snowden leaked the NSA documents. (Probably; I'm less sure about the impact of these examples.)
Most crime is bad, and most altruistic crime is ineffective or counterproductive. But not all.
An underrated solution here is for the busy person to simply charge for their time. Some professionals already do this - my coworker recently paid a few hundred dollars for an hour with someone who built a successful social media app.
It can be as easy as turning on the Stripe integration on your Calendly.
Isn't this true for the provision of any public (non-excludable) good? A faster road network, public science funding, or clean water benefits some people, firms, and industries more than others. And to the degree community-building resources can be discretized, ordinary market mechanics can distribute them, in which case they cease to be cause-general.

On the other side of the argument, consider that any substantial difference in QALY / $ implies that
a QALY maximizer should favor giving $ to some causes over others, and this logic holds in general for [outcome you care about] / [resource you're able to allocate]. If that resource is labor, attention, or event-space hours, you rederive the issue laid out in the original post.
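The logic above can be sketched as a toy model (all cause names and QALY/$ figures here are hypothetical, and marginal returns are assumed constant for simplicity):

```python
# Toy model: a maximizer of [outcome] allocates all of [resource] to
# the option with the highest [outcome]/[resource] ratio.
# The effectiveness numbers are made up for illustration.

causes = {
    "cause_a": 25.0,  # hypothetical QALYs per $1,000
    "cause_b": 5.0,
    "cause_c": 1.0,
}

def allocate(budget_thousands: float, effectiveness: dict) -> dict:
    """With constant marginal returns, the whole budget goes to the
    single most cost-effective cause; everything else gets zero."""
    best = max(effectiveness, key=effectiveness.get)
    return {cause: (budget_thousands if cause == best else 0.0)
            for cause in effectiveness}

allocation = allocate(100.0, causes)
total_qalys = sum(causes[c] * amt for c, amt in allocation.items())
```

The same structure holds if you swap dollars for labor-hours or attention: any persistent gap in the ratio pushes the allocator toward corner solutions, which is exactly why cause-general resources are hard to keep cause-general.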